Re: mysql behaviour

2009-03-10 Thread Miguel
Apparently the error log is supposed to go to the syslog file, but I
didn't find any errors there.

I will configure a dedicated log file for the slow connections and the
errors.

Thank you very much for your help.

best regards,
Miguel

On Tue, 2009-03-10 at 00:41 +0100, Claudio Nanni wrote:

 I do not know how Debian works,
 but in any case a MySQL instance where you cannot read the error log is
 not even thinkable;
 you should at least be able to read the MySQL server output (error log)
 before going any further.
 
 As for the status, you could reset the status variables and start
 monitoring them from now on to observe the behavior,
 but again, all available information is needed to debug a supposed
 MySQL performance issue.
 
 If you can, configure a dedicated error log file and restart the
 server.
 
 Cheers
 
 Claudio
 
 
 Miguel wrote:
  I'm not lucky: the server does not have a dedicated error log. It is
  not configured. It says: Error logging goes to syslog. This is a
  Debian improvement :)
 
  but I cannot see anything clear in syslog.
 
 
 
 
  On Mon, 2009-03-09 at 23:20 +0100, Claudio Nanni wrote:
  ...could be
  just a Django problem (it does not close connections correctly),
 
 


Re: mysql behaviour

2009-03-10 Thread Claudio Nanni
Hi Miguel,
I think it would also be very useful to watch the Django error log,
just to check for any clue about the aborted_clients, that is, clients
whose connections were closed ungracefully.

De nada!

Claudio



2009/3/10 Miguel m...@moviquity.com

  Apparently the error log is supposed to go to the syslog file, but I
 didn't find any errors there.

 I will configure a dedicated log file for the slow connections and the
 errors.

 Thank you very much for your help.

 best regards,
 Miguel


 On Tue, 2009-03-10 at 00:41 +0100, Claudio Nanni wrote:

 I do not know how Debian works,
 but in any case a MySQL instance where you cannot read the error log is
 not even thinkable;
 you should at least be able to read the MySQL server output (error log)
 before going any further.

 As for the status, you could reset the status variables and start
 monitoring them from now on to observe the behavior,
 but again, all available information is needed to debug a supposed
 MySQL performance issue.

 If you can, configure a dedicated error log file and restart the
 server.

 Cheers

 Claudio


 Miguel wrote:
  I'm not lucky: the server does not have a dedicated error log. It is
  not configured. It says: Error logging goes to syslog. This is a
  Debian improvement :)
 
  but I cannot see anything clear in syslog.
 
 
 
 
  On Mon, 2009-03-09 at 23:20 +0100, Claudio Nanni wrote:
  ...could be
  just a Django problem (it does not close connections correctly),






Question about the code in check_routine_access?

2009-03-10 Thread Tianwei
Hi, all. I don't know if this is the right list to ask such a naive question.
I am analyzing some MySQL code for my work. For the following code:
sql/sql_parser.c:5247
bool
check_routine_access(THD *thd, ulong want_access,char *db, char *name,
 bool is_proc, bool no_errors)
{
  TABLE_LIST tables[1];

  bzero((char *)tables, sizeof(TABLE_LIST));
  tables->db= db;
  tables->table_name= tables->alias= name;

.
}

For tables, why do we use an array instead of the following?
TABLE_LIST tables, *tables_p;
tables_p = &tables;

I know that they are equivalent, but can anyone explain why the former
is used? Are there any special rules, or am I misunderstanding the code?


Thanks very much


Tianwei


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org

Re: When will MySQL support array datatype?

2009-03-10 Thread Moon's Father
Thanks for your fast reply.
So only a temporary table can simulate an array datatype.
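The temporary-table simulation can be sketched concretely. The snippet below is an illustration only: it uses Python's sqlite3 so it is self-contained, whereas on MySQL the equivalent would be a CREATE TEMPORARY TABLE with an integer index column; all table and column names here are invented.

```python
import sqlite3

# Simulating an "array" with a temporary table. The idx column plays
# the role of the array subscript; sqlite3 stands in for MySQL here.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TEMPORARY TABLE arr (idx INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO arr (idx, val) VALUES (?, ?)",
                enumerate(["a", "b", "c"]))

# "arr[1]" becomes a lookup by index ...
cur.execute("SELECT val FROM arr WHERE idx = 1")
print(cur.fetchone()[0])  # b

# ... and iterating the array becomes an ordered scan.
cur.execute("SELECT val FROM arr ORDER BY idx")
print([row[0] for row in cur.fetchall()])
```

The same pattern works from a stored procedure: populate the temporary table, then read elements back by index or scan them in order.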

On Mon, Feb 16, 2009 at 3:12 AM, Claudio Nanni claudio.na...@gmail.comwrote:

 Complex datatypes are not compatible with the concept of relational
 databases;
 you probably want an Object-Oriented DBMS or an Object-Relational
 DBMS.

 Cheers
 Claudio Nanni


 Moon's Father wrote:

 Hi.
   Who could tell me when MySQL will support an array datatype?
 Any reply will be appreciated.







-- 
I'm a MySQL DBA in China.
More about me just visit here:
http://yueliangdao0608.cublog.cn


RE: InnoDB deadlocks

2009-03-10 Thread Jerry Schwartz
-Original Message-
From: Paul McCullagh [mailto:paul.mccull...@primebase.com]
Sent: Monday, March 09, 2009 6:34 PM
To: Mattia Merzi
Cc: MySql
Subject: Re: InnoDB deadlocks

Hi Mattia,

On Mar 9, 2009, at 6:21 PM, Mattia Merzi wrote:

 Hi everyone,

 I've got some problems with deadlocks on InnoDB tables.
 In section 13.6.8.10, How to Cope with Deadlocks,
 of the MySQL 5.1 manual, the last paragraph states:
 --
 Another way to serialize transactions is to create an auxiliary
 semaphore table that contains just a single row.
 Have each transaction update that row before accessing other tables.
 In that way, all transactions happen in a serial fashion.
 Note that the InnoDB instant deadlock detection algorithm also works
 in this case, because the serializing lock is a row-level lock.
 With MySQL table-level locks, the timeout method must be used to
 resolve deadlocks.
 --

 Just two very simple questions:
 - using this method, transactions will be serialized so the deadlock
 problem will never come up again?

Yes.

But transactions will also no longer run in parallel which will reduce
the throughput of the server if it is accessed by multiple clients.

[JS] There is no free lunch, but sometimes you get a free appetizer. Within
limits, you will get better throughput if you have multiple transactions
running in parallel rather than running them serially. The problem is to
determine those limits. If you have the luxury, you run stress tests and
examine the queue lengths for the various bits: disk, memory, cpu, network.
That will give you some idea of what your system can tolerate, as well as
telling you where to put your money. In reality, most of us run stress tests
during production. :(

Of the various resources, memory is the one with the sharpest knee in the
curve because either you have enough or you don't. If you have enough
memory, then more will not help.

Remember, there is always exactly one bottleneck in a system at any given
moment.

By the way, if transactions are constantly presented to a resource faster
than the resource can service, the queue length will grow to infinity. That
will cause performance problems. ;)


  This seems clear reading that sentence; the only thing that gives me
 pause is the statement:
  Note that the InnoDB instant deadlock detection algorithm also
 works in this case ... can someone
  briefly explain this concept to me?
 - if I create a semaphore table and I start any deadlock-prone
 transaction by issuing a LOCK TABLE ... WRITE
  on that table and an UNLOCK TABLES immediately after the commit,
 will the effect be the same?

Yes, this will work the same.

  'Cause the last sentence of the manual says:
  With MySQL table-level locks, the timeout method must be used to
 resolve deadlocks

This is true, but it is only a problem if deadlocks are possible.
However, deadlocks are not possible if you start every transaction
with a LOCK TABLE ... WRITE.

  will this mean that if I use this LOCK TABLE method I can get
 timeouts instead of deadlocks
  on the very same transactions?

Yes, this can happen, but only if a deadlock is possible. By
exclusively locking a single resource (a row or a table) at the start
of each transaction, you explicitly make deadlocks impossible.

However, it is recommended to UPDATE a single row in the new table,
instead of using LOCK TABLE ... WRITE.

This has the same effect, but with the benefit that deadlock detection
will still work in the case that you do not modify all transactions as
suggested.

This might happen because:

- your code base is vast, and you miss one, or
- you add a new transaction and forget to add the exclusive locking
UPDATE, or
- you leave out certain transactions on purpose because you have never
had deadlocking problems with them before.

All good reasons not to serialize all transactions, and therefore a
good reason to use the method that continues to support deadlock
detection.
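The single-row UPDATE pattern described above looks roughly like the sketch below. It is illustrative only: sqlite3 stands in for MySQL so the example is self-contained (sqlite's locking is far coarser than the InnoDB row locks that make the pattern effective in practice), and the mutex/accounts names are invented.

```python
import sqlite3

# "Semaphore row" pattern: every transaction first updates the same
# single row, so conflicting transactions queue on that row lock
# instead of deadlocking on their real tables. sqlite3 is used only
# so this sketch runs anywhere; on InnoDB the UPDATE takes a row lock.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE mutex (id INTEGER PRIMARY KEY, dummy INTEGER)")
cur.execute("INSERT INTO mutex VALUES (1, 0)")
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
cur.execute("INSERT INTO accounts VALUES (1, 100), (2, 100)")

def transfer(amount):
    cur.execute("BEGIN")
    # Serializing step: acquire the lock on the semaphore row first.
    cur.execute("UPDATE mutex SET dummy = dummy + 1 WHERE id = 1")
    # The real work happens strictly after the semaphore row is held.
    cur.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
    cur.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))
    cur.execute("COMMIT")

transfer(10)
transfer(5)
cur.execute("SELECT balance FROM accounts ORDER BY id")
print(cur.fetchall())  # [(85,), (115,)]
```

The point of the pattern is the ordering, not the dummy counter: any transaction that skips the semaphore UPDATE can still deadlock, which is exactly the failure mode listed above.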

Best regards,

Paul


--
Paul McCullagh
PrimeBase Technologies
www.primebase.org
www.blobstreaming.org
pbxt.blogspot.com











Re: InnoDB deadlocks

2009-03-10 Thread Mattia Merzi
Hi there,

well, thanks for the hints regarding transaction-serialization
performance, but if you read my very first e-mail, I didn't mention
any kind of performance trouble. I just sometimes (once a *month*)
have to re-issue some db commands because of these deadlocks;
99.9% of the time I have free CPU, free memory and free disk I/O
resources, so fortunately performance is not a problem, just some
deadlocks, and I want to be sure that everything works as it should,
even in that 0.1% of the time that the database is heavily used.
Additionally, the database that causes me this kind of trouble is
dedicated to batch jobs, so in any case nobody will be angry if the
jobs finish a couple of minutes later than usual... :)

It's something like: 23.5 hours a day data is loaded, 0.5 hours a day
some clients run in parallel and call some stored procedures
that can run in a serial fashion without causing any kind of trouble.

Thank you anyway, any e-mail is really appreciated, even if
performance is not a problem for me... and I hope that this sentence
will not make this e-mail be considered as spam  :D

Greetings, and thanks again, Jerry!

Thanks also to Paul for the really-tiny-but-really-appreciated
answers to my questions!

Mattia Merzi.

2009/3/10 Jerry Schwartz jschwa...@the-infoshop.com:
[...]
 [JS] There is no free lunch, but sometimes you get a free appetizer. Within
 limits, you will get better throughput if you have multiple transactions
 running in parallel rather than running them serially. The problem is to
 determine those limits. If you have the luxury, you run stress tests and
 examine the queue lengths for the various bits: disk, memory, cpu, network.
 That will give you some idea of what your system can tolerate, as well as
 telling you where to put your money. In reality, most of us run stress tests
 during production. :(

 Of the various resources, memory is the one with the sharpest knee in the
 curve because either you have enough or you don't. If you have enough
 memory, then more will not help.

 Remember, there is always exactly one bottleneck in a system at any given
 moment.

 By the way, if transactions are constantly presented to a resource faster
 than the resource can service, the queue length will grow to infinity. That
 will cause performance problems. ;)




Re: When will MySQL support array datatype?

2009-03-10 Thread Darryle Steplight
Moon,
 I'm not sure exactly what you are trying to do, but why don't you
just serialize() or json_encode() your data into a column?
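Darryle's suggestion amounts to serializing the array into an ordinary text column. A minimal sketch, with Python's json and sqlite3 standing in for PHP's json_encode() and MySQL (the table and column names are invented):

```python
import json
import sqlite3

# Store the "array" serialized as JSON text in a plain column.
# sqlite3 stands in for MySQL; on MySQL the column would typically
# be TEXT or VARCHAR.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, tags TEXT)")

tags = ["mysql", "dba", "china"]
cur.execute("INSERT INTO t (id, tags) VALUES (1, ?)", (json.dumps(tags),))

# Reading the row back yields the original list after decoding.
row = cur.execute("SELECT tags FROM t WHERE id = 1").fetchone()
print(json.loads(row[0]))  # ['mysql', 'dba', 'china']
```

The trade-off is that the server cannot index or filter on individual elements; the serialized column is opaque to SQL.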

On Tue, Mar 10, 2009 at 9:35 AM, Moon's Father
yueliangdao0...@gmail.com wrote:
 Thanks for your fast reply.
 So only a temporary table can simulate an array datatype.

 On Mon, Feb 16, 2009 at 3:12 AM, Claudio Nanni claudio.na...@gmail.comwrote:

 Complex datatypes are not compatible with the concept of relational
 databases;
 you probably want an Object-Oriented DBMS or an Object-Relational
 DBMS.

 Cheers
 Claudio Nanni


 Moon's Father wrote:

 Hi.
   Who could tell me when MySQL will support an array datatype?
 Any reply will be appreciated.







 --
 I'm a MySQL DBA in china.
 More about me just visit here:
 http://yueliangdao0608.cublog.cn





BSD/OS and 5.1

2009-03-10 Thread Dave Shariff Yadallee - System Administrator a.k.a. The Root of the Problem
I mean MySQL 5.1

I compiled MySQL 5.1.32, and when running the tests I got:

 gmake -k test
cd unittest  gmake test
gmake[1]: Entering directory `/usr/src/contrib/mysql/mysql-5.1.32/unittest'
perl unit.pl run mytap mysys  ../storage/archive ../storage/blackhole 
../storage/csv ../storage/example ../storage/federated ../storage/heap 
../storage/myisam ../storage/myisammrg   ../plugin/daemon_example 
../plugin/fulltext
Running tests: mytap mysys ../storage/archive ../storage/blackhole 
../storage/csv ../storage/example ../storage/federated ../storage/heap 
../storage/myisam ../storage/myisammrg ../plugin/daemon_example 
../plugin/fulltext
mytap/t/basic-t..Useless use of string in void context at -e line 1.
mytap/t/basic-t.. No subtests run
mysys/bitmap-t...Useless use of string in void context at -e line 1.
mysys/bitmap-t... No subtests run
mysys/base64-t...Useless use of string in void context at -e line 1.
mysys/base64-t... No subtests run
mysys/my_atomic-tUseless use of string in void context at -e line 1.
mysys/my_atomic-t No subtests run

Test Summary Report
---
mytap/t/basic-t  (Wstat: 0 Tests: 0 Failed: 0)
  Parse errors: No plan found in TAP output
mysys/bitmap-t   (Wstat: 0 Tests: 0 Failed: 0)
  Parse errors: No plan found in TAP output
mysys/base64-t   (Wstat: 0 Tests: 0 Failed: 0)
  Parse errors: No plan found in TAP output
mysys/my_atomic-t (Wstat: 0 Tests: 0 Failed: 0)
  Parse errors: No plan found in TAP output
Files=4, Tests=0,  1 wallclock secs ( 0.07 usr  0.10 sys +  0.00 cusr  0.10 csys
 =  0.27 CPU)
Result: FAIL
Failed 4/4 test programs. 0/0 subtests failed.
gmake[1]: *** [test] Error 255
gmake[1]: Leaving directory `/usr/src/contrib/mysql/mysql-5.1.32/unittest'
gmake: *** [test-unit] Error 2
cd mysql-test ; \
/usr/bin/perl ./mysql-test-run.pl   --mysqld=--binlog-format=mixed
Logging: ./mysql-test-run.pl  --mysqld=--binlog-format=mixed
090310 10:53:13 [ERROR] Fatal error: Please read Security section of the 
manual to find out how to run mysqld as root!

090310 10:53:13 [ERROR] Aborting

mysql-test-run: *** ERROR: Could not find version of MySQL
gmake: *** [test-ns] Error 1
cd mysql-test ; \
/usr/bin/perl ./mysql-test-run.pl   --ps-protocol 
--mysqld=--binlog-format=row
Logging: ./mysql-test-run.pl  --ps-protocol --mysqld=--binlog-format=row
090310 10:53:14 [ERROR] Fatal error: Please read Security section of the 
manual to find out how to run mysqld as root!

090310 10:53:14 [ERROR] Aborting

mysql-test-run: *** ERROR: Could not find version of MySQL
gmake: *** [test-pr] Error 1
gmake: Target `test' not remade because of errors. 

Is this safe to deploy or are patches going to be needed?






Re: When will MySQL support array datatype?

2009-03-10 Thread Andrew Garner
This is part of the SQL Standard.  MySQL has a worklog open on it:
http://forge.mysql.com/worklog/task.php?id=2081

On Sat, Feb 14, 2009 at 9:05 PM, Moon's Father
yueliangdao0...@gmail.com wrote:
 Hi.
   Who could tell me when MySQL will support an array datatype?
 Any reply will be appreciated.

 --
 I'm a MySQL DBA in china.
 More about me just visit here:
 http://yueliangdao0608.cublog.cn





using a subquery/self-join to recursively retrieve a tree

2009-03-10 Thread Ali, Saqib
Hello,

I have the following simplistic table representing a hierarchy:

++--+--+-+-+---+
| Field  | Type | Null | Key | Default | Extra |
++--+--+-+-+---+
| uid| int(10)  | NO   | PRI | 0   |   |
| name   | char(80) | YES  | | NULL|   |
| mail   | char(80) | YES  | | NULL|   |
| manageruid | int(10)  | YES  | | NULL|   |
++--+--+-+-+---+

How can I do some recursion to get the UIDs of all the employees
reporting up to a manager, regardless of how deep the tree is? I can
do this using LDAP and/or PHP, but I am not sure how to do it as a
MySQL query.

Any thoughts? Thanks

saqib
http://www.capital-punishment.us




Re: using a subquery/self-join to recursively retrieve a tree

2009-03-10 Thread Peter Brawley

How can I do some recursion to get the UIDs of all the employees
reporting up to a manager, regardless of how deep the tree is? I can
do this using LDAP and/or PHP, but I am not sure how to do it as a
MySQL query.


Examples & discussion at 
http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html


PB
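Since MySQL at the time had no recursive query support, one common approach is to walk the adjacency list from client code, one level per query. A rough sketch using Saqib's schema follows; sqlite3 is used only so the example is self-contained, and in practice each per-level SELECT would be issued from PHP or similar.

```python
import sqlite3

# Client-side recursion over the manager/employee adjacency list.
# Schema follows the uid/name/manageruid table from the question;
# the sample rows are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (uid INTEGER PRIMARY KEY, name TEXT, manageruid INTEGER)")
cur.executemany("INSERT INTO emp VALUES (?, ?, ?)", [
    (1, "ceo", None),
    (2, "vp", 1),
    (3, "eng1", 2),
    (4, "eng2", 2),
    (5, "other", None),
])

def reports_under(manager_uid):
    """Return the uids of everyone below manager_uid, however deep."""
    found, frontier = set(), [manager_uid]
    while frontier:
        # One SELECT per tree level: fetch the direct reports of the
        # current frontier, then descend into them.
        qmarks = ",".join("?" * len(frontier))
        cur.execute(f"SELECT uid FROM emp WHERE manageruid IN ({qmarks})", frontier)
        frontier = [uid for (uid,) in cur.fetchall() if uid not in found]
        found.update(frontier)
    return sorted(found)

print(reports_under(1))  # [2, 3, 4]
```

The number of round trips equals the depth of the tree, not the number of employees, which is usually acceptable for org charts; nested-set and path-enumeration designs (as in the linked chapter) avoid the loop entirely.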

-

Ali, Saqib wrote:

Hello,

I have the following simplistic table representing a hierarchy:

++--+--+-+-+---+
| Field  | Type | Null | Key | Default | Extra |
++--+--+-+-+---+
| uid| int(10)  | NO   | PRI | 0   |   |
| name   | char(80) | YES  | | NULL|   |
| mail   | char(80) | YES  | | NULL|   |
| manageruid | int(10)  | YES  | | NULL|   |
++--+--+-+-+---+

How can I do some recursion to get the UIDs of all the employees
reporting up to a manager, regardless of how deep the tree is? I can
do this using LDAP and/or PHP, but I am not sure how to do it as a
MySQL query.

Any thoughts? Thanks

saqib
http://www.capital-punishment.us

  






mysqlimport remote host problem

2009-03-10 Thread René Fournier
I'm writing a script that, each night, copies a small database to my
laptop on the local network. I'm having trouble getting it to work.
Here's my syntax so far (run on the server):


mysqlimport --host=192.168.0.224 --user=root --password alba2  
alba2_2009-03-10_00h45m.Tuesday.sql


Which produces:

---
mysqlimport: Error: You have an error in your SQL syntax; check the  
manual that corresponds to your MySQL server version for the right  
syntax to use near '-03-10_00h45m IGNORE 0 LINES' at line 1, when  
using table: alba2_2009-03-10_00h45m

---

The sql file is produced by automysqlbackup...  Not sure what I'm  
missing, probably something obvious. Anyway, here's the first part of  
the sql file I want to import:


-- MySQL Administrator dump 1.4
--
-- --
-- Server version   5.0.67-log


/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;

/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS,  
FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE,  
SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;



--
-- Create schema alba2
--

CREATE DATABASE IF NOT EXISTS alba2;
USE alba2;

--
-- Definition of table `alba2`.`accounts`
--

DROP TABLE IF EXISTS `alba2`.`accounts`;
CREATE TABLE  `alba2`.`accounts` (
  `id` smallint(5) unsigned NOT NULL auto_increment,
  `fp` varchar(40) NOT NULL,
  `created` int(10) unsigned NOT NULL default '0',
  `status` enum('Active','Inactive') NOT NULL default 'Active',
  `account_name` varchar(40) NOT NULL,
  `account_full_name` varchar(40) NOT NULL,
  `address` varchar(40) NOT NULL,
  `city` varchar(40) NOT NULL,
  `province` varchar(10) NOT NULL,
  `postcode` varchar(10) NOT NULL,
  UNIQUE KEY `id` (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=10 DEFAULT CHARSET=utf8;

--
-- Dumping data for table `alba2`.`accounts`
--





Re: Concurrent Inserts with merged table

2009-03-10 Thread buford
 On Sat, Mar 7, 2009 at 12:10 PM,  buf...@biffco.net wrote:
 Another way to find out whether this is the problem (yes, I know, you
 already answered this question ;-) is to set concurrent_insert=2 (see
 http://dev.mysql.com/doc/refman/5.0/en/concurrent-inserts.html ).


No go. Setting concurrent_insert=2 appears to make absolutely no
difference in any scenario, i.e., it didn't even work as claimed for the
situation where there is free space in one or more of the merge table
union set. It also made no difference for the other situation I was
probing, where concurrent selects failed to work after one or more of the
union set's constituent tables had been packed and then unpacked (and
repaired, and optimized, and flushed).

It's kind of surprising to think I'm the first to exercise these features.
I can't help but think I'm missing something.








Re: mysqlimport remote host problem

2009-03-10 Thread René Fournier
OK, I've managed to do the same thing with just the mysql command line
program:


	mysql -h 192.168.0.224 -u root -p alba2 < /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql


Works great. However, the sql file is normally gzipped, so can I
ungzip the file on the fly (and without removing the .gz version)
and pipe the contents as I did above? (Yes, I'm UNIX-impaired.)
Something like:


	mysql -h 192.168.0.224 -u root -p alba2 < gzip -dc /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql.gz


But so that it works...

...Rene

On 10-Mar-09, at 7:38 PM, René Fournier wrote:

I'm writing script that, each night, copies a small database to my  
laptop on the local network. I'm having trouble getting it to work.  
Here's my syntax so far (run on the server):


mysqlimport --host=192.168.0.224 --user=root --password alba2  
alba2_2009-03-10_00h45m.Tuesday.sql


Which produces:

---
mysqlimport: Error: You have an error in your SQL syntax; check the  
manual that corresponds to your MySQL server version for the right  
syntax to use near '-03-10_00h45m IGNORE 0 LINES' at line 1, when  
using table: alba2_2009-03-10_00h45m

---

The sql file is produced by automysqlbackup...  Not sure what I'm  
missing, probably something obvious. Anyway, here's the first part  
of the sql file I want to import:


-- MySQL Administrator dump 1.4
--
-- --
-- Server version   5.0.67-log


/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;

/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS,  
FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE,  
SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;



--
-- Create schema alba2
--

CREATE DATABASE IF NOT EXISTS alba2;
USE alba2;

--
-- Definition of table `alba2`.`accounts`
--

DROP TABLE IF EXISTS `alba2`.`accounts`;
CREATE TABLE  `alba2`.`accounts` (
 `id` smallint(5) unsigned NOT NULL auto_increment,
 `fp` varchar(40) NOT NULL,
 `created` int(10) unsigned NOT NULL default '0',
 `status` enum('Active','Inactive') NOT NULL default 'Active',
 `account_name` varchar(40) NOT NULL,
 `account_full_name` varchar(40) NOT NULL,
 `address` varchar(40) NOT NULL,
 `city` varchar(40) NOT NULL,
 `province` varchar(10) NOT NULL,
 `postcode` varchar(10) NOT NULL,
 UNIQUE KEY `id` (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=10 DEFAULT CHARSET=utf8;

--
-- Dumping data for table `alba2`.`accounts`
--









Re: mysqlimport remote host problem

2009-03-10 Thread Darryle Steplight
Hi Rene,
 Just a heads-up: you might want to keep your username/password
credentials private.

On Tue, Mar 10, 2009 at 10:16 PM, René Fournier m...@renefournier.com wrote:
 OK, I've managed to do the same thing with just the mysql command line
 program:

        mysql -h 192.168.0.224 -u root -p alba2 <
 /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql

 Works great. However, the sql file is normally gzipped, so Can I ungzip
 the file on the fly (and without removing the .gzip version) and pipe the
 contents as I did above? (Yes, I'm UNIX-impaired.)  Something like:

        mysql -h 192.168.0.224 -u root -p alba2 < gzip -dc
 /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql.gz

 But so that it works...

 ...Rene

 On 10-Mar-09, at 7:38 PM, René Fournier wrote:

 I'm writing script that, each night, copies a small database to my laptop
 on the local network. I'm having trouble getting it to work. Here's my
 syntax so far (run on the server):

 mysqlimport --host=192.168.0.224 --user=root --password alba2
 alba2_2009-03-10_00h45m.Tuesday.sql

 Which produces:

 ---
 mysqlimport: Error: You have an error in your SQL syntax; check the manual
 that corresponds to your MySQL server version for the right syntax to use
 near '-03-10_00h45m IGNORE 0 LINES' at line 1, when using table:
 alba2_2009-03-10_00h45m
 ---

 The sql file is produced by automysqlbackup...  Not sure what I'm missing,
 probably something obvious. Anyway, here's the first part of the sql file I
 want to import:

 -- MySQL Administrator dump 1.4
 --
 -- --
 -- Server version       5.0.67-log


 /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
 /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
 /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
 /*!40101 SET NAMES utf8 */;

 /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
 /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS,
 FOREIGN_KEY_CHECKS=0 */;
 /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO'
 */;


 --
 -- Create schema alba2
 --

 CREATE DATABASE IF NOT EXISTS alba2;
 USE alba2;

 --
 -- Definition of table `alba2`.`accounts`
 --

 DROP TABLE IF EXISTS `alba2`.`accounts`;
 CREATE TABLE  `alba2`.`accounts` (
  `id` smallint(5) unsigned NOT NULL auto_increment,
  `fp` varchar(40) NOT NULL,
  `created` int(10) unsigned NOT NULL default '0',
  `status` enum('Active','Inactive') NOT NULL default 'Active',
  `account_name` varchar(40) NOT NULL,
  `account_full_name` varchar(40) NOT NULL,
  `address` varchar(40) NOT NULL,
  `city` varchar(40) NOT NULL,
  `province` varchar(10) NOT NULL,
  `postcode` varchar(10) NOT NULL,
  UNIQUE KEY `id` (`id`)
 ) ENGINE=MyISAM AUTO_INCREMENT=10 DEFAULT CHARSET=utf8;

 --
 -- Dumping data for table `alba2`.`accounts`
 --











Re: mysqlimport remote host problem

2009-03-10 Thread wultsch
Note the space after the -p: alba2 will be the default database *after* he
is prompted for and correctly gives the password for r...@whateverhishostis.

As he did not give the password, and is not connecting to a host that one
could reach from the net, he really has not given out particularly useful
info even if we had ill intents.


Sent from my Verizon Wireless BlackBerry

-Original Message-
From: Darryle Steplight dstepli...@gmail.com

Date: Tue, 10 Mar 2009 22:20:26 
To: René Fournierm...@renefournier.com
Cc: mysqlmysql@lists.mysql.com
Subject: Re: mysqlimport remote host problem


Hi Rene,
 Just a head's up. You might want to keep your username/password
credentials private.

On Tue, Mar 10, 2009 at 10:16 PM, René Fournier m...@renefournier.com wrote:
 OK, I've managed to do the same thing with just the mysql command line
 program:

        mysql -h 192.168.0.224 -u root -p alba2 <
 /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql

 Works great. However, the sql file is normally gzipped, so Can I ungzip
 the file on the fly (and without removing the .gzip version) and pipe the
 contents as I did above? (Yes, I'm UNIX-impaired.)  Something like:

        mysql -h 192.168.0.224 -u root -p alba2 < gzip -dc
 /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql.gz

 But so that it works...

 ...Rene

 On 10-Mar-09, at 7:38 PM, René Fournier wrote:

 I'm writing script that, each night, copies a small database to my laptop
 on the local network. I'm having trouble getting it to work. Here's my
 syntax so far (run on the server):

 mysqlimport --host=192.168.0.224 --user=root --password alba2
 alba2_2009-03-10_00h45m.Tuesday.sql

 Which produces:

 ---
 mysqlimport: Error: You have an error in your SQL syntax; check the manual
 that corresponds to your MySQL server version for the right syntax to use
 near '-03-10_00h45m IGNORE 0 LINES' at line 1, when using table:
 alba2_2009-03-10_00h45m
 ---

 The sql file is produced by automysqlbackup...  Not sure what I'm missing,
 probably something obvious. Anyway, here's the first part of the sql file I
 want to import:

 -- MySQL Administrator dump 1.4
 --
 -- --
 -- Server version       5.0.67-log


 /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
 /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
 /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
 /*!40101 SET NAMES utf8 */;

 /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
 /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS,
 FOREIGN_KEY_CHECKS=0 */;
 /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO'
 */;


 --
 -- Create schema alba2
 --

 CREATE DATABASE IF NOT EXISTS alba2;
 USE alba2;

 --
 -- Definition of table `alba2`.`accounts`
 --

 DROP TABLE IF EXISTS `alba2`.`accounts`;
 CREATE TABLE  `alba2`.`accounts` (
  `id` smallint(5) unsigned NOT NULL auto_increment,
  `fp` varchar(40) NOT NULL,
  `created` int(10) unsigned NOT NULL default '0',
  `status` enum('Active','Inactive') NOT NULL default 'Active',
  `account_name` varchar(40) NOT NULL,
  `account_full_name` varchar(40) NOT NULL,
  `address` varchar(40) NOT NULL,
  `city` varchar(40) NOT NULL,
  `province` varchar(10) NOT NULL,
  `postcode` varchar(10) NOT NULL,
  UNIQUE KEY `id` (`id`)
 ) ENGINE=MyISAM AUTO_INCREMENT=10 DEFAULT CHARSET=utf8;

 --
 -- Dumping data for table `alba2`.`accounts`
 --











Re: mysqlimport remote host problem

2009-03-10 Thread Rob Wultsch
On Tue, Mar 10, 2009 at 7:16 PM, René Fournier m...@renefournier.com wrote:

 OK, I've managed to do the same thing with just the mysql command line
 program:

mysql -h 192.168.0.224 -u root -p alba2 <
 /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql

 Works great. However, the sql file is normally gzipped, so Can I ungzip
 the file on the fly (and without removing the .gzip version) and pipe the
 contents as I did above? (Yes, I'm UNIX-impaired.)  Something like:

mysql -h 192.168.0.224 -u root -p alba2 < gzip -dc
 /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql.gz

 But so that it works...

 ...Rene


Pipe is your friend. You probably want something like:

gzip -dc /Backup/Latest/alba2_2009-03-10_00h45m.Tuesday.sql.gz
| mysql -h 192.168.0.224 -u root -p alba2
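For reference, the decompress-on-the-fly half of that pipeline can also be done from a script without ever writing an uncompressed copy to disk. A small Python sketch of just the streaming part (the file path is invented, and the mysql end of the pipe is omitted so the example is self-contained):

```python
import gzip
import os
import tempfile

# Demonstrates streaming decompression: read a .gz dump back as text
# without creating an uncompressed copy on disk. The dump content and
# file name are made up for the example.
workdir = tempfile.mkdtemp()
dump_gz = os.path.join(workdir, "example_dump.sql.gz")

sql_text = "CREATE DATABASE IF NOT EXISTS alba2;\nUSE alba2;\n"
with gzip.open(dump_gz, "wt") as f:
    f.write(sql_text)

# gzip.open inflates on the fly, just like `gzip -dc` in the pipeline;
# the .gz original is left untouched.
with gzip.open(dump_gz, "rt") as f:
    restored = f.read()

print(restored == sql_text)        # True
print(os.path.exists(dump_gz))     # True: the archive is still there
```

In a real script the restored stream would be fed to a mysql client connection line by line instead of being read into memory.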