Re: NOW() is stuck...

2013-06-27 Thread Eric Bergen


 --
 Andy Wallace
 iHOUSEweb, Inc.
 awall...@ihouseweb.com
 (866) 645-7700 ext 219
 --
 Sometimes it pays to stay in bed on Monday, rather than spending the
 rest of the week debugging Monday's code.
 - Christopher Thompson

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



 







-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: NOW() is stuck...

2013-06-26 Thread Eric Bergen
This is the expected behavior if you set the timestamp variable in
your session. This is the same mechanism that replication uses to
execute transactions on the slave with the correct time. Setting
timestamp back to default or reopening your connection will fix it.

MariaDB [(none)]> set timestamp=1372296737;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> select now(); select sleep(5); select now();
+-+
| now()   |
+-+
| 2013-06-26 21:32:17 |
+-+
1 row in set (0.00 sec)

+--+
| sleep(5) |
+--+
|0 |
+--+
1 row in set (5.00 sec)

+-+
| now()   |
+-+
| 2013-06-26 21:32:17 |
+-+
1 row in set (0.00 sec)

MariaDB [(none)]> set timestamp=default;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> select now();
+-+
| now()   |
+-+
| 2013-06-26 21:33:53 |
+-+
1 row in set (0.00 sec)

MariaDB [(none)]> select now();
+-+
| now()   |
+-+
| 2013-06-26 21:33:54 |
+-+
1 row in set (0.00 sec)

On Wed, Jun 26, 2013 at 4:18 PM, John Meyer
johnme...@pueblocomputing.com wrote:
 Well, if you want to get unstuck in time, maybe you need to call Billy
 Pilgrim ;-)

 Andy Wallace wrote:

 We've been having some issues with one of our MySQL servers lately, and
 currently the dang thing is stuck. For at least the last hour, NOW() is
 returning the same value:

 mysql> select now();
 +-+
 | now()   |
 +-+
 | 2013-06-26 02:27:14 |
 +-+

 The system variable timestamp also has that same time value stored in it.
 How can we kick this loose so that the values are more current with real
 time? (It is currently 3:08PM here, despite our MySQL instance thinking
 it's 2am.) The system time on the machine is correct:

 $ date
 Wed Jun 26 15:08:56 PDT 2013


 This is MySQL 5.1.46 running on solaris2.10.

 Any ideas short of restarting the MySQL engine? I'm willing to do that,
 but would much
 rather wait and not do it in the middle of the day.

 Thanks,
 Andy




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Which test suites are valid for --extern option in mysql-test-run?

2012-11-26 Thread Eric Bergen
Those tests are typically used by people developing the mysqld server.
The distributed packages have already had the tests run on them, so I'm
not sure how much value they will provide unless you're modifying
mysqld.

I don't have a great answer but my best guess is that extern can't be
used with tests that start or stop mysqld servers like the replication
tests.
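A hedged sketch of the invocation (the socket path is made up, and the exact --extern syntax varies between versions; check the README in your tree):

```shell
# Run one test suite against an already running server, from the
# mysql-test directory of a source tree.
./mysql-test-run.pl --extern alias                           # syntax quoted in the README
./mysql-test-run.pl --extern socket=/tmp/mysql.sock alias    # var=value form in newer trees
```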

On Sat, Nov 24, 2012 at 12:16 AM, Tianyin Xu t...@cs.ucsd.edu wrote:
 Hi, all,

 I want to test my running MySQL server. According to the README in the
 mysql-test directory, "If you want to use an already running MySQL server
 for specific tests, use the --extern option to mysql-test-run."

 For example, mysql-test-run --extern alias

 But the README also pointed out that some tests cannot run with an
 external server.

 My question is: how do I know which test suites in the t directory are valid
 for the --extern option? Without knowing this, I cannot judge whether a test
 fails because it's not supported or because of a real problem.

 Thanks a lot!
 Tianyin

 --
 Tianyin XU,
 http://cseweb.ucsd.edu/~tixu/



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Why configuration directives are all case sensitive?

2012-11-24 Thread Eric Bergen
Yes, the missing piece of the puzzle is that configuration files are
turned into command line arguments internally. Programs parse the
configuration files and place the resulting options at the beginning of
the command line argument array, which is why they are case sensitive:
they go through the same handling as command line arguments. So the
basic process is: read all the configuration files, in order of
precedence, into the beginning of the command line array, then send
that array to the command line handling code. This is also how argument
precedence works. As the array is walked, later options override
earlier ones, so things passed on the command line sit at the end and
override options in the config files.
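A minimal sketch of that merge-then-scan process (option names and the "on" default are illustrative, not mysqld's actual parser):

```python
def merge_options(config_file_opts, argv):
    # Config-file options are placed *before* the real command line,
    # then the combined array is scanned left to right, so later
    # values (the command line) override earlier ones (the files).
    merged = config_file_opts + argv
    options = {}
    for arg in merged:
        if arg.startswith("--"):
            key, _, value = arg[2:].partition("=")
            # the parser also treats '-' and '_' in names as equivalent
            options[key.replace("-", "_")] = value or "on"
    return options

# config file sets port=3306; the command line overrides it
opts = merge_options(["--port=3306", "--skip-networking"], ["--port=3307"])
print(opts)  # {'port': '3307', 'skip_networking': 'on'}
```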

On Fri, Nov 23, 2012 at 10:08 PM, Tianyin Xu t...@cs.ucsd.edu wrote:
 Dear Eric,

 Thanks a lot for the explanation of argument directives! The concerns are
 very considerate.

 Actually, what I'm curious about is the configuration directives in the
 configuration file, i.e., my.cnf. In my experience, MySQL is one of the very
 few programs that treats these directives in a case sensitive manner.

 The directives in the configuration file are different from command line
 arguments, because there we don't have something like --port vs. -P. So a
 directive like Port clearly indicates that the user means port (exactly as
 you said), and this is the reason most apps do not differentiate them.

 Could you please explain about the directives in the configuration file?

 Thanks a lot!

 Happy Thanksgiving!!
 Tianyin



 On Fri, Nov 23, 2012 at 4:20 PM, Eric Bergen eric.ber...@gmail.com wrote:

 Setting aside anger and OS religious arguments, the real answer is that
 this is just how the option parsing code works. It doesn't always have to
 make sense. There are short and long args to programs. For example on the
 mysql client there is --port or -P and --password or -p. The short options
 have to be case sensitive because -P and -p mean different things. The
 short options are case sensitive so the long options may as well be.
 It keeps things simpler. Who wants to write --Port when --port means
 not hitting the shift key?

 There are a few exceptions to this. The option comparison treats _ and
 - as the same. I realize that isn't case but it just shows the
 matching isn't exact. So --show_warnings is valid. On the other side
 of the equal sign comparisons for true, on, false, and off are done
 case insensitive. So --show_warnings=FaLse is valid but
 --show_warningS=TruE isn't.

 If you want to be even more confused consider that mysql allows
 partial argument names. You don't have to type out the full long arg
 as long as you type enough that it only matches one option. For
 example mysql --so is enough to mean socket but mysql --s isn't
 because it can't be distinguished from 'show' variables. This gets
 confusing with things like b. mysql --b is batch mode. So is mysql -B
 but mysql -b is no beep. Confused yet?

 On Sun, Nov 18, 2012 at 6:42 PM, Tianyin Xu t...@cs.ucsd.edu wrote:
  On Sun, Nov 18, 2012 at 6:13 PM, Reindl Harald
  h.rei...@thelounge.netwrote:
 
 
 
  Am 19.11.2012 02:07, schrieb Tianyin Xu:
   You are saying as long as admins are careful, there's no
  misconfiguration?
   But why misconfigurations are so pervasive?
   Simply because the admins are not careful enough?
 
  yes
 
 
  That means I'm not the only dummy, and that means you should take care
  with the system configuration design if many people are careless.
 
 
 
   I apologize for my lack of respect. I don't know what's your stuff,
   but
   I guess they'll be more popular if you make them more friendly.
 
  it does not need to be more popular
  it is better not to be too popular but working clean and safe
 
  careless working these days usually also means not caring
  about security, which is not acceptable these days, and i
  know a lot of crap out there which is more popular than
  my work but with crappy quality and terribly insecure

  see all these CMS systems out there writing hundreds of
  warnings each request with error_reporting E_STRICT
  while my whole source code runs clean i know who is right
 
  really:
  if you find it useful to complain why a configuration is
  case-sensitive instead accept it and correct your fault
  you are doing the wrong job
 
 
  I'm not complaining. I'm just curious about the configuration and want to
  know you developers' thinking. I apologize if I gave you the impression of
  complaining by asking questions.
 
  Basically, I'm new to MySQL and find that MySQL really takes care of lots
  of things to give users an easy job. For example, units and enumeration
  options are all case insensitive -- 512K and 512k mean the same size,
  MIXED and mixed mean the same option, etc. Having that impression,
  when MySQL told me 'Port' was unknown, it did take me some time to figure
  it out. Maybe simply because all the other servers I have used, like
  PostgreSQL, httpd, etc., are case insensitive. That's the whole story,
  and that's why I ask

Re: Covering Index without the data file (myisam)

2012-11-24 Thread Eric Bergen
MyISAM can't do this but InnoDB can. If you change to an InnoDB table
and define your index as the primary key, then the row data is clustered
with the primary key. This means there is no additional storage
overhead for the primary key because it is just the row data. This
breaks down if you define secondary indexes, though, because the
primary key is copied into each secondary index.

http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html
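A minimal sketch of what that looks like (table and column names are invented): with InnoDB the primary key is the clustered index, so a table whose columns all live in the primary key carries no separate data file the way a MyISAM .MYD duplicates a covering index.

```sql
CREATE TABLE lookup (
  a INT NOT NULL,
  b INT NOT NULL,
  PRIMARY KEY (a, b)
) ENGINE=InnoDB;
-- The clustered index *is* the row storage. Adding a secondary index
-- such as KEY (b) would copy the primary key into it, which is the
-- break-down case mentioned above.
```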

On Sat, Nov 24, 2012 at 1:15 PM, Reindl Harald h.rei...@thelounge.net wrote:


 Am 24.11.2012 22:02, schrieb Hank:
 Hello everyone,

  I know this is a longshot, but is there any way to eliminate the MYD
 file for a table that has a full covering index? The index is larger
 than the datafile, since it contains all the records in the datafile,
 plus a second reverse index.  It seems redundant to have to also store
 a MYD file

 simple answer: NO




-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Why configuration directives are all case sensitive?

2012-11-23 Thread Eric Bergen
Setting aside anger and OS religious arguments, the real answer is that
this is just how the option parsing code works. It doesn't always have to
make sense. There are short and long args to programs. For example on the
mysql client there is --port or -P and --password or -p. The short options
have to be case sensitive because -P and -p mean different things. The
short options are case sensitive so the long options may as well be.
It keeps things simpler. Who wants to write --Port when --port means
not hitting the shift key?

There are a few exceptions to this. The option comparison treats _ and
- as the same. I realize that isn't case but it just shows the
matching isn't exact. So --show_warnings is valid. On the other side
of the equal sign comparisons for true, on, false, and off are done
case insensitive. So --show_warnings=FaLse is valid but
--show_warningS=TruE isn't.

If you want to be even more confused consider that mysql allows
partial argument names. You don't have to type out the full long arg
as long as you type enough that it only matches one option. For
example mysql --so is enough to mean socket but mysql --s isn't
because it can't be distinguished from 'show' variables. This gets
confusing with things like b. mysql --b is batch mode. So is mysql -B
but mysql -b is no beep. Confused yet?
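The unambiguous-prefix rule can be sketched in a few lines (the option list here is a made-up subset, not mysql's real set):

```python
def resolve_option(prefix, known_options):
    # Return the full option name if `prefix` matches exactly one
    # known option, else None (unknown or ambiguous).
    matches = [o for o in known_options if o.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

known = ["socket", "silent", "show-warnings", "batch"]
print(resolve_option("so", known))  # socket (unique prefix)
print(resolve_option("s", known))   # None   (ambiguous)
print(resolve_option("b", known))   # batch
```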

On Sun, Nov 18, 2012 at 6:42 PM, Tianyin Xu t...@cs.ucsd.edu wrote:
 On Sun, Nov 18, 2012 at 6:13 PM, Reindl Harald h.rei...@thelounge.netwrote:



 Am 19.11.2012 02:07, schrieb Tianyin Xu:
  You are saying as long as admins are careful, there's no
 misconfiguration?
  But why misconfigurations are so pervasive?
  Simply because the admins are not careful enough?

 yes


 That means I'm not the only dummy, and that means you should take care
 with the system configuration design if many people are careless.



  I apologize for my lack of respect. I don't know what's your stuff, but
  I guess they'll be more popular if you make them more friendly.

 it does not need to be more popular
 it is better not to be too popular but working clean and safe

 careless working these days usually also means not caring
 about security, which is not acceptable these days, and i
 know a lot of crap out there which is more popular than
 my work but with crappy quality and terribly insecure

 see all these CMS systems out there writing hundreds of
 warnings each request with error_reporting E_STRICT
 while my whole source code runs clean i know who is right

 really:
 if you find it useful to complain why a configuration is
 case-sensitive instead accept it and correct your fault
 you are doing the wrong job


 I'm not complaining. I'm just curious about the configuration and want to
 know you developers' thinking. I apologize if I gave you the impression of
 complaining by asking questions.

 Basically, I'm new to MySQL and find that MySQL really takes care of lots of
 things to give users an easy job. For example, units and enumeration
 options are all case insensitive -- 512K and 512k mean the same size,
 MIXED and mixed mean the same option, etc. Having that impression,
 when MySQL told me 'Port' was unknown, it did take me some time to figure
 it out. Maybe simply because all the other servers I have used, like
 PostgreSQL, httpd, etc., are case insensitive. That's the whole story, and
 that's why I ask on the forum, being curious about the reason.

 It's fine that you told me it's simply because you developers assume nobody
 would come up with the idea of writing options not EXACTLY as they appear in
 the documentation, so you simply do not want to support it. But I do not buy
 this, because MySQL developers do take care of a lot of things (such as the
 units and options I mentioned).

 T


 --
 Tianyin XU,
 http://cseweb.ucsd.edu/~tixu/



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: get a partial dump

2012-10-23 Thread Eric Bergen
mysqldump has a --where argument that allows you to pass in a where clause.
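For example (database, table, and predicate are hypothetical):

```shell
# Dump only the rows of one table that match a WHERE clause.
mysqldump --where="customer_id = 42" shop orders > orders_42.sql
```

Note that this shells out to mysqldump rather than going through JDBC, so from a Java program it would have to be run as an external process.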

On Thursday, October 18, 2012, Stefan Kuhn wrote:

 Hi everybody,
  I want to get insert statements from a mysql database for a number of rows
  of a table, specified by a where clause. I also need the inserts for the
  datasets linked via foreign keys in other tables. So I need a sort of
  partial dump, a bit like mysqldump, but restricted to a (small) set of
  data. This needs to be done in a Java program, using mysql via jdbc.
 Does anybody know an easy way to do this? Of course I could build the
 statements in the java code, but I thought mysql might offer that or at
 least
 parts of it. Does anybody have an idea?
 Thanks,
 Stefan

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net


Re: Can I measure the use of index?

2012-10-15 Thread Eric Bergen
For the record, mariadb also has table and index statistics, including
statistics on temporary tables.

On Mon, Oct 15, 2012 at 8:34 PM, Lixun Peng pengli...@gmail.com wrote:
 Hi,

 If you are using Percona Server, you can use this query:

 SELECT DISTINCT s.table_schema,
        s.table_name,
        s.index_name
 FROM information_schema.statistics s
 LEFT JOIN information_schema.index_statistics indxs
        ON (s.table_schema = indxs.table_schema
        AND s.table_name = indxs.table_name
        AND s.index_name = indxs.index_name)
 WHERE indxs.table_schema IS NULL
   AND s.index_name NOT IN ('PRIMARY');

 It will display all indexes that are not used.

 or this query:

 SELECT table_name,
        index_name,
        SUM(rows_read)
 FROM information_schema.index_statistics
 GROUP BY table_name,
          index_name;

 you can get all the indexes that are in use.

 On Tue, Oct 16, 2012 at 12:44 AM, Perrin Harkins per...@elem.com wrote:

 On Mon, Oct 15, 2012 at 11:24 AM, Carlos Eduardo Caldi
 ce_ca...@hotmail.com wrote:
  Somebody knows how can I log or measure the index use ?

 http://www.percona.com/doc/percona-toolkit/2.1/pt-index-usage.html

 - Perrin

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




 --
 Senior MySQL Developer @ Taobao.com

 Mobile Phone: +86 18658156856 (Hangzhou)
 Gtalk: penglixun(at)gmail.com
 Twitter: http://www.twitter.com/plinux
 Blog: http://www.penglixun.com



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Replication rings/maatkit (was: HA Scalability w MySQL + SAN + VMWare)

2012-04-02 Thread Eric Bergen
I suggest buying a copy of High Performance MySQL 3rd edition that
just came out. It has a whole chapter on HA that answers a lot of
these questions.

On Mon, Apr 2, 2012 at 6:14 PM, Wes Modes wmo...@ucsc.edu wrote:

 Replication rings are possible but you must design your application to
 take special care to NOT update the same row in multiple nodes of the
 ring at the same time. This is even harder to design and code for than
 splitting writes/reads to master/slaves.

 Also the loss of one node of a replication ring is not as easy to
 recover from as simply promoting one slave to become the new master of
 a replication tree (demoting the recovered former-master to become yet
 another slave) as there may be pending events in the relay logs of the
 lost node that have not yet been relayed to the downstream node.

 I may not have every answer, but I have seen nearly every kind of
 failure.  Everyone else is encouraged to add their views to the
 discussion.


 Has anyone used maatkit or Percona to setup circular replication?  How
 does it affect this system's reliability and robustness?  Do the tools
 help to deal with fail over?

 W.

 --
 Wes Modes
 Systems Designer, Developer, and Administrator
 University Library ITS
 University of California, Santa Cruz


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql




-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: 4 minute slow on select count(*) from table - myisam type

2011-10-03 Thread Eric Bergen
Can you run show processlist in another connection while the select
count(*) query is running and say what the state column is?

On Mon, Oct 3, 2011 at 7:00 AM, Joey L mjh2...@gmail.com wrote:
 this is not a real query on the site - it is just a way I am measuring
 performance on mysql - I do not know if it is such a great way to test.
 Looking for a better way to get a performance read on my site...do you have
 any ideas? besides just viewing pages on it.
 thanks
 mjh


 On Mon, Oct 3, 2011 at 9:58 AM, Andrés Tello mr.crip...@gmail.com wrote:

 have you tried

 select count(yourindex) instead of select count(*) ?


 On Mon, Oct 3, 2011 at 7:53 AM, Joey L mjh2...@gmail.com wrote:

 Thanks for the input -
 1. I will wait 48 hours and see what happens.
 2. can you tell me what are some performance tests I can do to help me
 better tune my server ?
 3. I am concerned about this table (SHOW TABLE STATUS output):
      Name: w6h8a_sh404sef_urls
      Engine: MyISAM   Version: 10   Row_format: Dynamic
      Rows: 8908402   Avg_row_length: 174   Data_length: 1551178184
      Max_data_length: 281474976710655   Index_length: 2410850304
      Data_free: 0   Auto_increment: 8908777
      Create_time: 2011-09-22 11:16:03   Update_time: 2011-10-02 21:17:20
      Check_time: 2011-10-02 10:12:04   Collation: utf8_general_ci
 what can I do to make it run faster - i did not write the code...but need to
 optimize the server to handle this table when it gets larger. It is used for
 url re-writes - so it has a lot of urls.
 thanks
 mjh

 On Mon, Oct 3, 2011 at 12:38 AM, Bruce Ferrell bferr...@baywinds.org
 wrote:

 
  The meaning is:
 
  increase max_connections
  reduce wait_timeout
  -- 28800 is wait 8 hours before closing out dead connections
  same for interactive_timeout
 
 
  increase key_buffer_size ( 7.8G) increase join_buffer_size
  -- This keeps mysql from having to run to disk constantly for keys
  -- Key buffer size / total MyISAM indexes: 256.0M/7.8G
  -- You have a key buffer of 256M and 7.8G of keys
 
  join_buffer_size ( 128.0K, or always use indexes with joins)
  Joins performed without indexes: 23576 of 744k queries.
  -- You probably want to look at the slow query log. Generalize the
  queries and then do an explain on each query. I have seen instances
  where a query I thought was using an index wasn't and I had to
  re-write... with help from this list :-)  Thanks gang!
 
 
  increase tmp_table_size ( 16M)
  increase max_heap_table_size ( 16M)
  -- When making adjustments, make tmp_table_size/max_heap_table_size
 equal
 
  increase table_cache (  1k )
  -- Table cache hit rate: 7% (1K open / 14K opened)
  -- Increase table_cache gradually to avoid file descriptor limits
 
  All of that aside, you need to let this run for at least 24 hours. I
  prefer 48 hours.  The first line says mysql has only been running 9
  hours.  You can reset the timeouts interactively by entering at the
  mysql prompt:
 
  set global wait_timeout=some value
 
  You can do the same for the interactive_timeout.
 
  Setting these values too low will cause long running queries to abort
 
 
  On 10/02/2011 07:02 PM, Joey L wrote:
   Variables to adjust:
    max_connections ( 100)
    wait_timeout ( 28800)
    interactive_timeout ( 28800)
    key_buffer_size ( 7.8G)
    join_buffer_size ( 128.0K, or always use indexes with joins)
    tmp_table_size ( 16M)
    max_heap_table_size ( 16M)
    table_cache ( 1024)
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:    http://lists.mysql.com/mysql?unsub=mjh2...@gmail.com
 
 







-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Restore only one database or one table

2011-05-19 Thread Eric Bergen
The mysql command line has the -o option to only execute queries for
the default database. This can be used to restore one database from a
dump file that contains many. For specific tables you can set up a
restore user that only has permissions on the tables you want to
restore, then use the -f flag to continue on error. Only use this in
emergencies though.
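A hedged example of the first technique (file and database names are made up; -o is short for --one-database):

```shell
# Replay only the statements that apply to `mydb` from a dump
# containing many databases.
mysql -o mydb < full_dump.sql
```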

On Thursday, May 19, 2011, Adarsh Sharma adarsh.sha...@orkash.com wrote:
 Johan De Meersman wrote:

 - Original Message -


 From: Suresh Kuna sureshkumar...@gmail.com

 Try to take a tab separated dump, so you can restore what ever you
 want in terms of tables or databases.



 Uhh. I'm a bit fuzzy today, but I really don't see how a tab-separated dump 
 will help split off tables or databases :-)


 To answer the original question, though: the technically accurate answer is
 yes, you can. It's made easy because mysqldump conveniently dumps
 database-per-database and table-per-table. It's a bugger to do, however,
 because if you take a monolithic dump you need to open the whole dumpfile in
 a text editor and copy the data you want to another file or straight to the
 MySQL commandline. Good luck with your 250G backup :-)

 You can use sed or awk to look for markers and split the file up that way. 
 You'd be much better off in the future to dump database-per-database, and if 
 you think you need it table-per-table. 's Not all that hard, just script to 
 loop over the output of show databases and show tables. Probably plenty of 
 scripts on the internet that do exactly that.

 Compressing your dumps is a good idea, too - the output is a text file, so 
 bzip2 will probably compress that a factor 10 or better. Simply use bzcat to 
 pipe the file back into the MySQL client to restore.




 That's pretty nice. Exactly what I was expecting to hear.

 I will let you know after some practical implementation.

 Thanks & best regards,
 Adarsh Sharma


-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: InnoDB and rsync

2011-01-31 Thread Eric Bergen
I skimmed over this thread and I think I can help clarify the innodb,
rsync, and lvm situation.

The basic issue with just running rsync on the files under a running
mysqld is that rsync will copy different parts of the files at different
points in time. This means mysqld can write to a region of a file both
before and after rsync has read it, so the restored copy can be
inconsistent. This is the worst case scenario.

The next best thing is to run FLUSH TABLES WITH READ LOCK (global read
lock) and hold the lock while the rsync is running. This works fine for
myisam tables because all file activity on myisam tables stops while a
global read lock is held.

This isn't guaranteed to work with innodb. Internally to mysql, flush
tables with read lock only stops queries from acquiring the write locks
that let them modify tables. This won't make innodb hold still, for a
few different reasons.

First, a select query in innodb can actually modify data files. A
select on a page with unmerged records from the insert buffer will
cause innodb to merge those records before making the page available to
the select, which causes some disk i/o. If this happens while rsync is
running, the resulting backup can be corrupted. So even while holding a
global read lock and running only selects, innodb can write data.

The master thread(s) perform background tasks such as flushing dirty
pages, merging the insert buffer, and purging old records whenever
innodb feels like there is spare i/o capacity. These threads don't know
how to hold still during a global read lock and can corrupt a backup
taken with rsync.

The safest way to create a backup without using something like
XtraBackup is to get a snapshot at the filesystem level, or below that
at the block device level. This is effectively what LVM does. When you
create an LVM snapshot it freezes that device at a single point in
time. When you restore the backup, innodb goes through its recovery
procedure as if the power went out.
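A hedged sketch of that procedure (volume group, snapshot size, and paths are all made up; the read lock is not strictly required for innodb crash-recovery, but it quiesces myisam tables and pins a consistent binlog position):

```shell
# In an open mysql client session, and LEAVE the session open:
#   mysql> FLUSH TABLES WITH READ LOCK;

# Freeze the block device at a single point in time.
lvcreate --snapshot --size 5G --name mysql-snap /dev/vg0/mysql

# Back in the still-open client session:
#   mysql> UNLOCK TABLES;

# Copy the frozen image off, then drop the snapshot.
mount -o ro /dev/vg0/mysql-snap /mnt/snap
rsync -a /mnt/snap/ /backups/mysql/
umount /mnt/snap && lvremove -f /dev/vg0/mysql-snap
```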

Some possible solutions to this were discussed on the internals list a
few years ago. I'm not sure what
has been implemented since then. The list thread is here:
http://lists.mysql.com/internals/35527

-Eric

On Fri, Jan 28, 2011 at 1:59 PM, Robinson, Eric eric.robin...@psmnv.com wrote:
 * flush tables
 * rsync while mysqld is running
 * stop mysqld
 * second rsync


 Unless we can verify 100% that there is a safe way to do it without
 shutting down MySQL, then I'm sure the approach you described above is
 the one we will end up with. Thanks for your input.

 --
 Eric Robinson




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Is is possible to update a column based on a REGEXP on another column?

2011-01-22 Thread Eric Bergen
There isn't a built in way but you can try 
http://www.mysqludf.org/lib_mysqludf_preg/

I would go with the php/perl script if this is a one time thing.

-Eric
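A minimal sketch of the one-off script approach, in Python rather than PHP/Perl (table and column names are invented; a real script would run the SELECT and UPDATEs over a database connection):

```python
import re

def extract_tag(username):
    # Pull the bracketed value out of e.g. 'somename[A]' or
    # '[DDD]someothername'; None when there are no brackets.
    m = re.search(r"\[([^\]]*)\]", username)
    return m.group(1) if m else None

# Hypothetical rows standing in for a SELECT id, username FROM users.
rows = [(1, "somename[A]"), (2, "[DDD]someothername"), (3, "plain")]
for user_id, name in rows:
    tag = extract_tag(name)
    if tag is not None:
        print("UPDATE users SET tag = '%s' WHERE id = %d;" % (tag, user_id))
```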

On Jan 21, 2011, at 11:31 AM, Phil pchap...@nc.rr.com wrote:

 I have a table which contains a username column which may be constructed
 something like
 
 somename[A] or [DDD]someothername
 
 The A or DDD can be anything at all.
 
 I've added a new column to the table to which I'd like to populate with the
 value within the square brackets.
 
 I could write something in perl or php to run through each and update them
 but was wondering if there is a way to do this within mysql itself?  The
 regexp only returns a boolean so I can't see how to use that.
 
 Regards
 
 Phil
 
 
 -- 
 Distributed Computing stats
 http://stats.free-dc.org

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Innodb table space questions

2011-01-17 Thread Eric Bergen
reply inline

On Mon, Jan 17, 2011 at 9:30 PM, Angela liu yyll2...@yahoo.com wrote:
 Folks :

 two questions:

 1. can Innodb create per database table space , not per table table space?

No. The only available options are creating a global tablespace which
can be many files or a file per table.

 2. can we store table on specific tablespace like Oracle or DB2 when creating 
 table?

You can only choose to store a table in its own tablespace or in the
global one.

 Many thanks.







-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: export result from select statement

2011-01-08 Thread Eric Bergen
select into outfile is the correct way. What do you mean by "doesn't
work"? Does it give you an error?
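A CSV-shaped variant for reference (the path and table are illustrative; note the MySQL *server* writes the file, so the directory must be writable by mysqld on the server host, not by your shell user):

```sql
SELECT id, name FROM some_table
INTO OUTFILE '/tmp/result.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```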

On Sat, Jan 8, 2011 at 3:04 PM, LAMP l...@afan.net wrote:
 Hi guys,
 I wonder how to store the result of a SELECT query to a csv or txt file? Not
 a whole table nor a database, just the results of a SELECT query.
 Usually I use MySQL Query Browser and its Export feature, but in this case I
 don't have access to MySQL Query Browser. I do, though, have command line
 access.
 I found on one place something like
 #SELECT 2+2 into outfile '/path/to/specific/directory/test.csv';
 Though, it doesn't work?!?

 Thanks.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysqldumpslow

2010-12-18 Thread Eric Bergen
I don't think so. mysqldumpslow parses the log file on the local server.

On Fri, Dec 17, 2010 at 3:01 AM, partha sarathy par...@mafiree.com wrote:
 Is it possible to run mysqldumpslow on a remote host?

 -Partha


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow ALTER TABLE on 70M row InnoDB table

2010-08-21 Thread Eric Bergen
Most alter table operations in 5.0 will rebuild the entire table. The
best thing to increase for alter table speed in innodb is the buffer
pool. For more details on how innodb handles alter table see
http://ebergen.net/wordpress/2007/05/07/how-alter-table-locks-tables-and-handles-transactions/

On Wednesday, August 18, 2010, Xn Nooby xno...@gmail.com wrote:
 From what I have read, ALTER TABLE to add an index causes the entire
 table to be duplicated, so wouldn't my ALTER TABLE command be
 duplicating the work done by the SELECT command?



 On Wed, Aug 18, 2010 at 4:50 PM, mos mo...@fastmail.fm wrote:
 At 02:52 PM 8/18/2010, Xn Nooby wrote:

 Below is a generic version of the code I am trying.  It does copy the
 rows very quickly, but I will have to test to see how quickly the
 indices are built.  Is the below code what you were suggesting?  I had
 a little trouble dropping and later adding the primary index, but I
 think I got it figured out.

 Below I basically do this:
  make the_table_clone from the the_table
  drop the indices on the_table_clone
  copy the row from the_table to the_table_clone
  add the indices back to the_table_clone

 If this runs fast enough, I will then drop the_table, and rename
 the_table_clone to the_table


 USE the_database;


 DROP TABLE IF EXISTS the_table_clone;

 CREATE TABLE the_table_clone LIKE the_table;

 Or you can try something like:

 create table the_table_clone engine=innodb select * from the_table limit 0;
 This will create the same table structure but not the indexes so you don't
 have to drop the indexes below.



 # drop minor indices on clone

 ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col2_col3;

 ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col4_col6;

 ALTER TABLE the_table_clone DROP INDEX IX_the_table_on_col5_col2;


 # drop primary index on clone

 ALTER TABLE the_table_clone CHANGE id id INT UNSIGNED;

 You still need the statement above to change the autoinc to integer if you
 use my Create Table... statement above.



 ALTER TABLE the_table_clone DROP PRIMARY KEY;


 # add 2 new columns to clone

 ALTER TABLE the_table_clone ADD flag_field int(11) DEFAULT '0', ADD
 price_amount float DEFAULT '0';


 # copy rows

 INSERT the_table_clone SELECT *, NULL, NULL FROM the_table LIMIT
 0,1000;

 #INSERT the_table_clone SELECT *, NULL, NULL FROM the_table;

 Why do you have two insert statements? If you are inserting a group of
 records at a time then you need a limit statement on each, and increment the
 offset by the number of rows that have been added.

 I would explicitly specify the column list for both the Insert and the
 Select to make sure they match up. There is no point going through all this
 if it inserts the data into the wrong columns!
 Check the data before creating the indexes to make sure the same number of
 rows have been copied over and the data is in the correct columns.



 # Add back indices in one command (for max speed)

 ALTER TABLE the_table_clone \
  ADD INDEX IX_the_table_on_col2_col3 (col2,col3),\
  ADD INDEX IX_the_table_on_col4_col6 (col4,col6),\
  ADD INDEX IX_the_table_on_col5_col2 (col5,col2),\
  MODIFY id INT SIGNED AUTO_INCREMENT,\
  ADD PRIMARY KEY(col1);

 Correct.

 Mike




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Recommended swap partition size

2010-04-19 Thread Eric Bergen
Google oom_adj and oom_score. You can control which process is most
likely to be killed.
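A sketch of what that looks like on Linux (the mysqld pid lookup is illustrative; writing the adjustment requires root, and newer kernels use oom_score_adj in place of the older oom_adj):

```shell
# Every process exposes its current OOM-killer "badness" score:
cat /proc/self/oom_score

# Make a process less likely to be chosen (root required; -17 disabled
# OOM-killing entirely on older kernels):
# echo -17 > /proc/$(pidof mysqld)/oom_adj
```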

On Mon, Apr 19, 2010 at 12:53 AM, Johan De Meersman vegiv...@tuxera.be wrote:


 On Sun, Apr 18, 2010 at 9:04 PM, Eric Bergen eric.ber...@gmail.com wrote:

 Usually I prefer to have linux kill processes rather than excessively
 swapping. I've worked on machines before that have swapped so badly

 I guess you never had the OOM killer randomly shooting down your SSH daemon
 on a machine hundreds of miles away, then :-)


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: How to corrupt a database please???

2010-04-18 Thread Eric Bergen
A couple of good tests are:

1. Corrupt a relay log. For this you can stop the sql thread, cat
/dev/urandom over the newest relay log, start the sql thread and watch
it fail.
2. Change the innodb_log_file_size in my.cnf without going through the
proper procedure to remove the old log files. In 5.0 this will cause
"incorrect information in .frm file" errors for queries, which will take a
little bit of work to track down.
3. Cat some random data over MyISAM files and run CHECK TABLE so
mysql marks them as crashed.
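As a safe illustration of step 1, on a scratch file standing in for a relay log (the filename is made up; never do this to files a running server still has open):

```shell
# Overwrite a throwaway "relay log" with 16 KB of random garbage.
dd if=/dev/urandom of=/tmp/fake-relay-bin.000042 bs=1k count=16 2>/dev/null
stat -c '%s' /tmp/fake-relay-bin.000042   # prints 16384
```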

On Sun, Apr 18, 2010 at 11:13 AM, Rob Wultsch wult...@gmail.com wrote:
 On Sun, Apr 18, 2010 at 11:07 AM, Jim Lyons jlyons4...@gmail.com wrote:
 You can remove the innodb logs and/or the innodb data file.  You can also
 remove some of the individual .idb files (if you're using file-per-table
 option).


 He originally asked about how to provide a training exercise about
 repairing a db. How the hell do you repair from not having data files?
 For that matter the recovery from lacking log files (and assuming a
 crashed state) is imho ugly as hell.

 --
 Rob Wultsch
 wult...@gmail.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Recommended swap partition size

2010-04-18 Thread Eric Bergen
Linux will normally swap out a few pages of rarely used memory so it's
a good idea to have some swap around. 2G seems excessive though.
Usually I prefer to have linux kill processes rather than excessively
swapping. I've worked on machines before that have swapped so badly
that it took minutes just to ssh to them. This is effectively a
failure scenario that can last for a lot longer than it takes to
restart/failover mysqld. For apache it means the clients will see
errors until the load balancer health check drops the server out of
rotation. The best solution in all cases is to keep an eye on swap
in/out and memory usage so neither the crash nor the excessive
swapping becomes a problem.
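One way to keep that eye on swap in/out on Linux (the counters are cumulative pages since boot, so it's the rate of change between samples that matters, not the absolute number):

```shell
# Cumulative pages swapped in/out since boot:
grep -E '^pswp(in|out) ' /proc/vmstat
# Sample twice a few seconds apart; a fast-growing delta indicates
# active swapping, which is the failure mode described above.
```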

On Wed, Apr 14, 2010 at 3:06 AM, Glyn Astill glynast...@yahoo.co.uk wrote:
 --- On Wed, 14/4/10, Dan Nelson dnel...@allantgroup.com wrote:

 Hammerman said:
  My organization has a dedicated MySQL server. The
 system has 32Gb of
  memory, and is running CentOS 5.3.  The default
 engine will be InnoDB.
  Does anyone know how much space should be dedicated to
 swap?

 I say zero swap, or if for some reason you NEED swap (for
 crashdumps maybe,
 but I didn't think Linux supported that), no more than
 2GB.  With that much
 RAM, you don't ever want to be in the state where the OS
 decides to page out
 8GB of memory (for example) to swap.  We have a few
 Oracle servers with
 between 32 and 48 GB of memory and they all live just fine
 without swap.


 But surely better to have a server that is paging out and has slowed to a 
 crawl than one where the oom killer starts killing off your processes, with 
 no swap I'd be turning overcommit off.





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Recommended swap partition size

2010-04-18 Thread Eric Bergen
The impact of swap activity on performance is dependent on the rate at
which things are being swapped and the speed of swapping.  A few pages
per second probably won't kill things but in this case it was swapping
hundreds of pages per second which killed performance. Disks are much
slower than ram.

In my environment mysqld_safe will failover the cluster if it detects
mysqld has crashed so I prefer the quick crash and failover to the
long period of slow response time. Many operators prefer the long, slow
response time; I don't, but it's a religious debate.

On Sun, Apr 18, 2010 at 12:13 PM, Rob Wultsch wult...@gmail.com wrote:
 On Sun, Apr 18, 2010 at 12:04 PM, Eric Bergen eric.ber...@gmail.com wrote:
 Linux will normally swap out a few pages of rarely used memory so it's
 a good idea to have some swap around. 2G seems excessive though.
 Usually I prefer to have linux kill processes rather than excessively
 swapping. I've worked on machines before that have swapped so badly
 that it took minutes just to ssh to them. This is effectively a
 failure scenario that can last for a lot longer than it takes to
 restart/failover mysqld. For apache it means the clients will see
 errors until the load balancer health check drops the server out of
 rotation. The best solution in all cases is to keep an eye on swap
 in/out and memory usage so neither the crash nor the excessive
 swapping becomes a problem.



 Umm, you were probably horribly over-utilized on I/O. Swapping by itself
 will not kill performance. I have some boxes where mysql has leaked a
 metric crap ton of memory and swapping is ok. The leaked memory is
 swapped out and sits in swap. Every now and again I create more
 swap to keep the server happy.

 Swapping is often preferable to crash with unplanned downtime.

 Note that innodb_flush_method can impact this...


 --
 Rob Wultsch
 wult...@gmail.com




-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Multiple table engine

2010-04-18 Thread Eric Bergen
This can become a problem when using replication. For example if you do:

begin;
insert into innodb_table;
insert into myisam_table;
insert into innodb_table;
rollback;

The innodb rows won't be replicated but the myisam row will.  There is
more info at:
http://dev.mysql.com/doc/refman/5.0/en/innodb-and-mysql-replication.html

On Thu, Apr 8, 2010 at 4:02 AM, Jigal van Hemert ji...@xs4all.nl wrote:
 Tompkins Neil wrote:

 Just looking for some confirmation that under a single database - I assume
 it is perfectly normal to have both MyISAM and InnoDB engines for
 different
 tables ?  Is there anything I need to be aware of ?

 In most cases, no problems. MySQL can mix engines without problems.
 Every engine uses its own specific buffers, so if your database becomes big
 and memory becomes an issue (large buffers needed) it might be handy to use
 only one engine (so you can set the buffers for the other engine(s) to a
 very low number).
 On a daily basis I use databases with mixed MyISAM and InnoDB tables.

 --
 Jigal van Hemert.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysqld_safe

2010-04-18 Thread Eric Bergen
It's distributed as part of mysql. The script is responsible for
restarting mysqld if it exits with a non zero return code such as when
it crashes.

On Mon, Mar 29, 2010 at 2:40 PM, Colin Streicher
co...@obviouslymalicious.com wrote:
 Yeah, it's just a shell script that acts as a wrapper around the mysql
 processes on debian systems (maybe others, I'm not sure).
 You can read it at 'less /usr/bin/mysqld_safe'

 Colin

 On March 29, 2010 11:51:36 am Glyn Astill wrote:
 --- On Mon, 29/3/10, Brown, Charles cbr...@bmi.com wrote:
  Hello All.  when I issued this
  command: ps -e | grep,  I noticed that mysqld_safe was
  up running in my system.
  My question is:  what is mysqld_safe and why was it
  running. Please help.
 
 
  [sp...@naxbmisq03 ~]$ ps -e | grep -i mysql
  11989 ?        00:00:00 mysqld_safe
  12025 ?        13:28:39 mysqld

 My understanding is that mysqld_safe is the process used to start mysqld
  and handle any runtime errors, crashes etc.


 --
 Anyone who has had a bull by the tail knows five or six more things
 than someone who hasn't.
                -- Mark Twain

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Is myisam_repair_threads considered safe

2009-09-17 Thread Eric Bergen
It's been in mysql for at least 7 years.

http://ebergen.net/wordpress/2009/04/11/longest-beta-ever-myisamchk-parallel-recover/

On Thursday, September 10, 2009, Rob Wultsch wult...@gmail.com wrote:
 myisam_repair_threads

 If this value is greater than 1, MyISAM  table indexes are created in
 parallel (each index in its own thread) during the Repair by sorting
 process. The default value is 1.
 Note: Multi-threaded repair is still beta-quality code.

 The note is present is all versions of MySQL manual:

 http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_myisam_repair_threads
 http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_myisam_repair_threads
 http://dev.mysql.com/doc/refman/4.1/en/server-system-variables.html#sysvar_myisam_repair_threads


 Can anyone comment about whether this setting is safe, and if so on
 what major versions?

 --
 Rob Wultsch
 wult...@gmail.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Fail to change MySQL data directory on ubuntu

2009-08-27 Thread Eric Bergen
That procedure is horribly incorrect. You should simply move the
ib_logfile* and ibdata* files along with the rest of the datadir. The
ibdata1 file contains InnoDB's system tables and, depending on your
setting of innodb_file_per_table, it may also contain your data!

On Thu, Aug 27, 2009 at 7:56 AM, Jia Chen chen.1...@gmail.com wrote:
 Chen, Did you really delete ibdata1 ?

 Yes, I did.

 Best,
 Jia


 Claudio Nanni wrote:


 2009/8/26 chen jia chen.1...@gmail.com

    Hi there,

    I am using MySQL on ubuntu 8.04.

    I followed this link

  http://www.ubuntu-howto.info/howto/how-to-move-mysql-databases-to-another-location-partition-or-hard-drive
    to change the data directory of MySQL.

    After stopping MySQL: sudo /etc/init.d/mysql stop

    I make a new directory: sudo mkdir /media/disk/MySQL_data

    then change the ownership of new directory, sudo chown mysql:mysql
    /media/disk/MySQL_data

    and copy all data to the new directory, cp -r -p /var/lib/mysql/*
    /media/disk/MySQL_data/ and deleted all files like ibdata1,


 Chen, Did you really delete ibdata1 ?






    ib_logfile0, and ib_logfile1.

    I then edit /etc/mysql/my.conf and update the datadir to my new
    directory. I also update /etc/apparmor.d/usr.sbin.mysql so that news
    lines with /var/lib/mysql replaced by /media/disk/MySQL_data are
    added.

    However, after sudo /etc/init.d/apparmor reload

    I try sudo /etc/init.d/mysql start

    I got
    * Starting MySQL database server mysqld
           [fail]

    If I change the datadir line in /etc/mysql/my.conf back to the
    original one, I can start MySQL successfully.

    I think I have done everything needed to change MySQL data directory.

    Why am I still getting this error?  Where can I start to look for
    the causes?

    Thanks.

    Jia

    --
    MySQL General Mailing List
    For list archives: http://lists.mysql.com/mysql
    To unsubscribe:
 http://lists.mysql.com/mysql?unsub=claudio.na...@gmail.com




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Why doesn't mySQL stop a query when the browser tab is closed

2009-07-25 Thread Eric Bergen
), and then the
  application code
   talks to the MySQL server. The only part of this chain that
  knows what the
   MySQL server is doing is the last bit, the application
  code, which is
   typically waiting for a response.
  
   Getting back to the user, HTTP itself is a stateless
  protocol. That means
   the web server has no way of knowing if the user, the
  browser, or even the
   user's computer is still there; it also doesn't really know
  what the user
   last did (it's up to the application code to remember that somehow).
  
   In order for an end user to cancel a query, there would
  have to be some way
   for the user to tell the browser to tell the web server to tell the
   application code to tell the MySQL server to stop. I'm
  pretty sure you could
   create a tired of waiting button for the user, but I
  haven't done it
   myself.
  
   Regards,
  
   Jerry Schwartz
   The Infoshop by Global Information Incorporated
   195 Farmington Ave.
   Farmington, CT 06032
  
   860.674.8796 / FAX: 860.674.8341
  
   www.the-infoshop.com
  
  
  
  
  
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:    http://lists.mysql.com/mysql?unsub=jschwa...@the-
  infoshop.com
  
  
  
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
  http://lists.mysql.com/mysql?unsub=dstepli...@gmail.com
  
  
 
 
 
  --
  A: It reverses the normal flow of conversation.
  Q: What's wrong with top-posting?
  A: Top-posting.
  Q: What's the biggest scourge on plain text email discussions?
 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=joh...@pixelated.net




 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Why can't I kill the query cache?

2009-05-29 Thread Eric Bergen
You can also flush the cache with echo 1 > /proc/sys/vm/drop_caches if
you have a new enough kernel.

On Fri, May 29, 2009 at 2:16 PM, Dan Nelson dnel...@allantgroup.com wrote:
 In the last episode (May 29), Gerald L. Clark said:
 Little, Timothy wrote:
  Also titled, I want this to run slow ALL the time...
 
  I have a group of dreadful queries that I have to optimize.
 
  Some take 20-30 seconds each -- the first time that I run them.  But
  then they never seem to take that long after the first time (taking less
  than a second then).  If I change the keywords searched for in the
  where clauses, then they take a long time again...  so it's the
  query-cache or something just like it.
 
  BUT, I am doing this each time :
  flush tables;
  reset query cache;
  set global query_cache_size=0;
  SELECT   SQL_NO_CACHE DISTINCT ca.conceptid AS headingid,
 
  And still it's not avoiding the cache.
 
  Is there a cache I'm missing?
 
  Tim...
 
 
 Disk cache, but I don't know how to clear it.

 Create a file 2x the size of your RAM (for a 2gb system, dd if=/dev/zero
 of=bigfile bs=1024k count=4096), then dd it to /dev/null (dd if=bigfile
 of=/dev/null bs=1024k).  That should flush your OS cache.  The guaranteed
 way would be to dismount then remount your filesystem, but that could be
 difficult depending on how many other processes are using it..

 --
        Dan Nelson
        dnel...@allantgroup.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question on replication terminology

2009-04-29 Thread Eric Bergen
Dual master replication can be either dual master dual write or dual
master single writer. The latter is preferred. In this configuration
replication is connected in both directions but clients only ever
connect to one master at a time. It's just as safe as master - slave
replication if you handle the failover correctly.

-Eric
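A common safeguard in the single-writer configuration (a sketch; it assumes your failover tooling is what flips the flag) is to keep the passive master read-only:

```sql
-- On the passive master: refuse writes from ordinary clients.
-- The replication SQL thread and SUPER-privileged users still bypass it.
SET GLOBAL read_only = 1;

-- During failover, swap the roles:
--   old active: SET GLOBAL read_only = 1;
--   new active: SET GLOBAL read_only = 0;
```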

On Tue, Apr 28, 2009 at 3:43 PM, Claudio Nanni claudio.na...@gmail.com wrote:
 Hi there,
 I would only like to stress that the only supported (and recommended)
 replication solution in MySQL is
 Master---Slave  replication.
 In this scenario you can have ONLY one master and (virtually) any number of
 slaves.
 There is NO other safe replication solution.
 The terms you mention seems to refer to the same solution, where you have
 two servers each acting as a master:
 this is a non-standard, dangerous scenario in MySQL and requires awareness in
 the application logic.

 Hope to have brought a little light in your mind

 Cheers
 Claudio



 Vikram Vaswani wrote:

 Hi

 I'm new to replication and looking through some docs on how to use it.
 Could
 someone please tell me if the following terms mean the same thing or, if
 not, what is the difference:

 master-master replication
 dual-master replication
 bidirectional replication

 TIA
 -BT




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: innodb_thread_concurrency at runtime in 4.1 ?

2009-04-27 Thread Eric Bergen
Hi,

I checked the 4.1.24 source and I don't see innodb_thread_concurrency
in the list of variables that can be set at runtime. I only see:

innodb_max_dirty_pages_pct,
innodb_max_purge_lag,
innodb_table_locks,
innodb_autoextend_increment

-Eric

On Sun, Apr 26, 2009 at 11:48 PM, Alagar samy a.alagars...@yahoo.co.in wrote:
 Hi,

 i am using mysql_server-4.1.23 and in the documentation of 4.1
 (http://dev.mysql.com/doc/refman/4.1/en/dynamic-system-variables.html) , it
 is mentioned innodb_thread_concurrency variable can be set at runtime.

 but i am getting error when trying to set dynamically ..

 mysql> SET GLOBAL innodb_thread_concurrency=4;
 ERROR 1193 (HY000): Unknown system variable 'innodb_thread_concurrency'

 still throws error when tried as set @innodb_thread_concurrency and set
 global @innodb_thread_concurrency ...

 can you please let me know whether 4.1 documentation is misleading or  this
 is a bug in mysql-4.1 (as in, this variable cannot be set at run-time) ?


 PS : i am able to set innodb_thread_concurrency variable at runtime in
 mysql-5.1 .. but this is not an option for me now .. i have to go with 4.1
 at this point ..


 A.Alagarsamy

 



-- 
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Sun bought by Oracle

2009-04-20 Thread Eric Bergen
I don't think the purchase was about MySQL at all. I'm sure Oracle is
far more interested in java, zfs, and solaris than they are MySQL.
MySQL is just an added bonus that will go well with their acquisition
of Innobase Oy a few years ago. Oracle didn't kill InnoDB and it's not
very likely that they will kill MySQL. If you walk around the user
conference this week you will see why it would be incredibly stupid
to try to kill MySQL. Not that companies like Percona would let that
happen anyway. After all it is GPL.

On Mon, Apr 20, 2009 at 10:02 AM,
michael.coll-ba...@verizonwireless.com wrote:


 From: Peter Brawley [mailto:peter.braw...@earthlink.net]

 An optimist wrote that. A pessimist, Larry Dignan at
 http://blogs.zdnet.com/BTL/?p=16598tag=nl.e539, wrote point six:

 Oracle gets to kill MySQL. There's no way Ellison will let that
 open source database mess with the margins of his database.
 MySQL at best will wither from neglect. In any case, MySQL
 is MyToast.

 We ought to know who's right within half a year.


 Is MySQL not Open Source?  Heck, I'd love to help with that.  We would
 just have to change the name to something befitting the product.  MySQL
 still tickles my MS warning alerts; My Pictures, My Music, My Videos,
 etc.  How about 'GNO', pronounced like 'Know', stands for GNO's Not
 Oracle?





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com





-- 
Eric Bergen
eric.ber...@provenscaling.com
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: flush-host problem

2009-04-11 Thread Eric Bergen
You can set max_connect_errors to a very large value to effectively
disable the host blocking. It's common to do this with load balancers
because TCP/IP health checks count as connection errors.

-Eric
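In 5.1 the variable is dynamic, so something along these lines works without a restart (the value is illustrative):

```sql
SET GLOBAL max_connect_errors = 999999999;  -- effectively never block
FLUSH HOSTS;  -- also clear any hosts that are already blocked
```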

On Sat, Apr 11, 2009 at 1:01 PM, Shawn Green shawn.gr...@sun.com wrote:

 Hello Gary,

 Gary Smith wrote:

 Mike,
 It's not a connection pooling issue per say.  We have several boxes running 
 spam assassin, sqlgrey and postfix (via mysql).  Normally these components 
 work great.  SA and sqlgrey both have a fixed number of connections, around 
 16, that they are generally actively using unless we get a burst of email, 
 at which time they will increase by a few.  The problem is that Postfix has 
 been receiving a higher level than normal of emails as we have taken 50% of 
 our servers offline at this location (setting them up at a new location).  
 We also have this bouncing across a couple of different firewalls, and for 
 some reason the connection to mysql is generating a larger number of these:

 090407 12:26:42 [Warning] Aborted connection 972479 to db: 'db' user: 'user' 
 host: 'host' (Got an error reading communication packets)

 We do know the network isn't optimal right now and are working to fix the 
 issues but we are hoping to get by just for the short term.

 But that leads back to the original question about increase the connection 
 error cutoff before banning a host.

 We are using 5.1.32 with INNODB tables.

 
 From: mos [mo...@fastmail.fm]
 Sent: Tuesday, April 07, 2009 9:18 AM
 To: mysql@lists.mysql.com
 Subject: Re: flush-host problem

 At 10:39 AM 4/7/2009, Gary Smith wrote:

 I have system that is generating a larger than normal number of connection
 errors.  We know why the errors are occuring and are working to resolve
 them (connectivity and load issue on the client).  The question is, how
 can I tweak mysql to tolerate a higher level than normal of bad
 connections before banning the host.

 What happens is that when we have 300-500 connections a few random ones
 will get mucked up during a heavier than normal load on the client.  I
 have set the max connections to 3000 (which we never get close to).

 So, is there a config/startup setting to tweak to ease the banning of bad
 connections, thus reducing the need for me to continually run mysqladmin
 flush-hosts on the server?
 --

 ...

 The server generates those Aborted connection messages under the following 
 circumstances:

 1) The server was trying to pass information to the client and the client 
 stopped being there.

 2) The client sat idle beyond the wait_timeout setting.

 3) There was some kind of networking interference (such as a VPN or proxy 
 server closing an idle connection)  or bad connections.

 Look at your SHOW PROCESSLIST report. If it shows many connections that have 
 been idle for more than 30 seconds or so, then you need to change your client 
 software behavior. If these are pooled connections, make sure your idle 
 connection timeout for the pool (part of the pool configuration) is set to a 
 value smaller than wait_timeout.  Otherwise you need to audit your client 
 code to ensure that it is calling the appropriate close() function for the 
 library it is using to connect to the MySQL server. That will release the 
 connection and allow the server to reallocate those resources to handling the 
 commands for the live (and not idle) connections.

 For additional reasons for these errors, please peruse:
 http://dev.mysql.com/doc/refman/5.0/en/communication-errors.html

 Warmest regards,

 --
 Shawn Green, MySQL Senior Support Engineer
 Sun Microsystems, Inc.
 Office: Blountville, TN



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eric.ber...@gmail.com




--
Eric Bergen
eric.ber...@provenscaling.com
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQL Closing/Opening tables

2009-02-27 Thread Eric Bergen
MySQL can open a single table multiple times depending on how many
clients need to use it. This means that setting table_cache equal to
the total number of tables will only work if your MySQL server has a
single client.

For more details read:
http://dev.mysql.com/doc/refman/5.0/en/table-cache.html
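To see this in practice, a quick sketch (the table name t is hypothetical; run the SELECT from two separate client connections):

```sql
-- Open_tables counts cached table instances, not distinct tables:
-- the same table opened by two clients occupies two cache entries.
SELECT COUNT(*) FROM t;                    -- run from two separate clients
SHOW GLOBAL STATUS LIKE 'Open_tables';     -- can exceed the table count
SHOW GLOBAL STATUS LIKE 'Opened_tables';   -- steady growth under normal
                                           -- load suggests the cache is
                                           -- too small
```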

On Fri, Feb 27, 2009 at 2:53 PM,  dbrb2002-...@yahoo.com wrote:
 Thanks Dan.. thats a valuable point.. and this actually happening with MyISAM 
 tables only..

 But the question is: when I set the table_cache higher than the total number 
 of tables, it should stop closing tables in the first place, so that only 
 un-opened tables will be opened and kept in the cache, avoiding the closing 
 and re-opening. But that doesn't seem to be the case.

 Unless the table_cache is also used(unlikely) for temporary tables which are 
 created by select queries..




 
 From: Dan Nelson dnel...@allantgroup.com
 To: dbrb2002-...@yahoo.com
 Cc: mysql@lists.mysql.com
 Sent: Friday, February 27, 2009 1:15:25 PM
 Subject: Re: MySQL Closing/Opening tables

 In the last episode (Feb 27), dbrb2002-...@yahoo.com said:
 Recently I noticed the server takes a lot of time, on and off, when opening
 and closing tables.  I tried increasing the table_cache to more than the
 total number of tables (file_limit is properly set); the problem still
 continues, lowering it made no difference, and so did a middle value.

 MyISAM tables flush dirty index blocks at the end of every update; this can
 cause a long wait inside closing tables.  If you have just deleted a lot
 of rows or did some other update touching many rows, you might have to flush
 a lot of dirty blocks.  Running show status like 'Key_blocks_not_flushed'
 during one of these periods should show the count starting out large,
 dropping rapidly, then leveling off when that table's blocks have been
 flushed.

 Fixes include:

 * Altering your troublesome tables and adding the DELAY_KEY_WRITE=1 option.
  This will force you to repair those tables after a mysql or OS crash,
  since the on-disk copies of the index will almost always be out of synch.

 * Switching to an engine with logging like InnoDB will allow mysql to write
  the changes to a transaction log immediately, then trickle out the actual
  key block updates over time.  If you want to try out mysql 6.0, the maria
  engine is basically MyISAM with logging.

 --
    Dan Nelson
    dnel...@allantgroup.com




-- 
Eric Bergen
eric.ber...@provenscaling.com
http://www.provenscaling.com




Re: how to disable InnoDB and MyISAM on ndb cluster

2009-01-01 Thread Eric Bergen
Hi Nipuna,

InnoDB can be disabled with the skip-innodb option. MyISAM can't
really be disabled because it's required to read the grant tables.

-Eric

On Sat, Dec 27, 2008 at 4:21 AM, Nipuna Perera nipu...@gmail.com wrote:
 Hi All,

 I'm using mysql-cluster-gpl-6.2.15 to create a DB cluster on our server
 machines. Currently I'm using 2 data nodes, 2 management nodes and two mysqld
 server nodes, which share the same data directory on a SAN.
 The SAN is used only to collect binary logs.

 What I need to know is:
 1. Is it possible to disable the InnoDB and MyISAM engines while using the ndb
 cluster on the mysqld servers? If so, can you tell me how to do it?
 2. Will disabling InnoDB and MyISAM disturb the ndb cluster in any way?

 Thanks and Regards,

 --
 Nipuna Perera
 නිපුණ පෙරේ‍රා
 http://nipunaperera.blogspot.com




-- 
high performance mysql consulting.
http://provenscaling.com


Re: Where should I download mysql 4.0.12?

2008-11-06 Thread Eric Bergen
If you still want it you can download it from the Proven Scaling mirror.

http://mirror.provenscaling.com/mysql/community/source/4.0/

-Eric

On Thu, Nov 6, 2008 at 1:58 AM, Joerg Bruehe [EMAIL PROTECTED] wrote:
 Hi!


 Moon's Father wrote:
 Hi.
Where can I download MySQL 4.0.12? I can't find it anywhere.

  You will not find it anywhere at MySQL; it has been out of support for more
  than 2 years (since September 2006).
 For an explanation, read the text here:
   http://downloads.mysql.com/archives.php?p=mysql-4.0

 As regards 4.0.12 in specific: That version is totally obsolete, the
 last published version of the 4.0 series was 4.0.27.
 Anybody installing 4.0.12 now would miss many security fixes and so run
 great risks.
 (This is valid for all unsupported versions, of course - the older a
 version is, the more security fixes will be missing.)


 Jörg

 --
 Joerg Bruehe,  MySQL Build Team, [EMAIL PROTECTED]
 Sun Microsystems GmbH,   Sonnenallee 1,   D-85551 Kirchheim-Heimstetten
 Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
 Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





-- 
high performance mysql consulting.
http://provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How Can I Auto Delete my Binary Files?

2008-07-10 Thread Eric Bergen
Shaun,

PURGE MASTER LOGS is available in 4.0; it just doesn't have the BEFORE
keyword. It should be trivial to write a script that decides
which log files to purge based on their mtime.

-Eric
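For reference, a hedged sketch of the syntax (the log file name is illustrative):

```sql
-- 4.0: purge everything up to, but not including, the named log file.
PURGE MASTER LOGS TO 'mysql-bin.000123';
-- Later versions add the BEFORE form that 4.0 lacks:
-- PURGE MASTER LOGS BEFORE NOW() - INTERVAL 7 DAY;
```

A cron script can pick the cutoff file by mtime and feed its name into the TO form.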

On Thu, Jul 10, 2008 at 12:15 PM, Shaun Adams [EMAIL PROTECTED] wrote:
 I'm running a RHEL 4.1 and MySQL 4.0.26

 so a lot of the more recent commands available in 5.0.x aren't available to
 me. Does anyone know of any scripts or anything I can use to delete log files
 that aren't being used by my slave servers? It's pretty safe to say
 that I can delete log files older than 7 days, so that eliminates the need
 to check for open files.







-- 
high performance mysql consulting.
http://provenscaling.com




Re: Why open-files-limit smaller than table_cache?

2008-06-08 Thread Eric Bergen
Hi,

It's true that MyISAM keeps two file descriptors open per open table,
but this is MyISAM specific. An open table in MySQL isn't a one-to-one
mapping to file descriptors; it's simply a C++ object. It's up to the
storage engine how it manages file descriptors. Engines can use worker
threads with their own descriptors or, like the heap engine, use none at
all. It's possible to have many more open tables than file
descriptors.

-Eric
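A rough way to see the gap between the two settings on a running server (a sketch only; no specific values implied):

```sql
SHOW VARIABLES LIKE 'open_files_limit';
SHOW VARIABLES LIKE 'table_cache';
SHOW GLOBAL STATUS LIKE 'Open_tables';
-- Open_tables counts cached table objects, not file descriptors, so
-- for engines like HEAP it can exceed open_files_limit without issue.
```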

On Mon, Jun 2, 2008 at 9:54 AM, Jean-Paul Bauer [EMAIL PROTECTED] wrote:
 Hi all,
 I'm a bit confused about the allowed range of values for the
 open-files-limit and the table_cache settings. The documentation
 states that the range of values allowed for open-files-limit is
 0-65535 and for table_cache it is 1-524288.

  Where I get confused is that, from my understanding, each table in the
  table_cache will require at least one file descriptor (two if it is a
  MyISAM table). Surely this means that the effective maximum for
  table_cache is 65535. What am I misunderstanding or missing here?

 Documentation links:
 http://dev.mysql.com/doc/refman/5.0/en/server-options.html#option_mysqld_open-files-limit
 http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#option_mysqld_table_cache

 Thanks for any clarification on this.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





-- 
high performance mysql consulting.
http://provenscaling.com




Re: Database cache corrupted

2008-04-27 Thread Eric Bergen
Can you send us the exact zabbix error?

On Sun, Apr 27, 2008 at 6:18 PM, Sergio Belkin [EMAIL PROTECTED] wrote:
 So has anyone ever had any problem with the database cache? :)

  2008/4/25 Sergio Belkin [EMAIL PROTECTED]:


   Hi, I am using zabbix (monitoring software) with mysql. zabbix goes
 zombie and complains with messages suggesting that the database cache
 is perhaps corrupted. How can I check and fix it?  I am using Centos
 5.1, an Intel(R) Xeon(R) CPU and 1 GB of RAM.
  
  
my.cnf is as follows:
  
[client]
port= 3306
socket  = /var/lib/mysql/mysql.sock
[mysqld]
port= 3306
socket  = /var/lib/mysql/mysql.sock
back_log = 50
max_connections = 100
max_connect_errors = 10
table_cache = 1024
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 128M
sort_buffer_size = 8M
join_buffer_size = 3M
thread_cache_size = 8
thread_concurrency = 8
query_cache_size = 64M
query_cache_limit = 2M
ft_min_word_len = 4
default_table_type = InnoDB
thread_stack = 192K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
log_slow_queries = /var/log/mysqld/slow-query-log
long_query_time = 5
log_long_format
tmpdir = /tmp
log_queries_not_using_indexes = /var/log/mysqld/not-indexes.log
expire_logs_days = 2
server-id = 1
key_buffer_size = 8M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
skip-bdb
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 600M
innodb_data_file_path = ibdata1:128M;ibdata2:50M:autoextend:max:12800M
innodb_file_io_threads = 4
innodb_thread_concurrency = 16
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 8M
innodb_log_file_size = 256M
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 120
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[myisamchk]
key_buffer = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
open-files-limit = 8192
  
EOF
  
Thanks in advance!
  
--
--
Open Kairos http://www.openkairos.com
Watch More TV http://sebelk.blogspot.com
Sergio Belkin -
  



  --
  --
  Open Kairos http://www.openkairos.com
  Watch More TV http://sebelk.blogspot.com
  Sergio Belkin -

  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





-- 
high performance mysql consulting.
http://provenscaling.com




Re: problems w/ Replication over the Internet

2008-04-21 Thread Eric Bergen
TCP checksums aren't as strong as the integrity checking you get from an
encrypted tunnel. It's rare, but corruption can happen.

Where are you reading the positions from and how are you taking the
snapshot to restore the slave?


On Mon, Apr 21, 2008 at 12:30 AM, Jan Kirchhoff [EMAIL PROTECTED] wrote:
 Eric Bergen wrote:

  Hi Jan,
  
   You have two separate issues here. First the issue with the link
   between the external slave and the master. Running mysql through
   something like stunnel may help with the connection and data loss
   issues.
  
  I wonder how any corruption could happen on a TCP connection as TCP has
  its own checksums and a connection would break down in case of a missing
  packet?

  The second problem is that your slave is corrupt. Duplicate key errors
   are sometimes caused by a corrupt table but more often by restarting
   replication from an incorrect binlog location. Try recloning the slave
   and starting replication again through stunnel.
  
  The duplicate key errors happen after I start at the beginning of a
  logfile (master_log_pos=0) when the positions that mysql reports as its
  last positions is not working.

  I think I have 2 issues:
  #1: how can this kind of binlog corruption happen on a TCP link although
  TCP has its checksums and resends lost packets?

  #2: why does mysql report a master log position that is obviously wrong?
  mysql reports log-position 172, which does not work at all in a change
  master to command; my only option is to start with master_log_pos=0, and
  the number of duplicate key errors and such that I have to skip after
  starting from master_log_pos=0 shows me that the real position where
  mysql stopped processing the binlog must be something in the
  thousands or tens of thousands, not 172?!

  Jan




-- 
high performance mysql consulting.
http://provenscaling.com




Re: Working with Images

2008-04-21 Thread Eric Bergen
Victor,

Please give us an example with the python removed and include the
actual syntax error.


-Eric

On Fri, Apr 18, 2008 at 8:41 AM, Victor Subervi [EMAIL PROTECTED] wrote:
 Hi;
  The python code works properly, so I assume this is a strictly MySQL
  question now :)
  If I grab an image in the database thus:

   sql = "select pic1 from products where id='" + str(id) + "';"
   cursor.execute(sql)
   pic1 = cursor.fetchall()[0][0].tostring()
  #  pic1 = cursor.fetchall()[0][0]  // either this or the above line

  and try and re-insert it thus:

   cursor.execute('update products set pic1=%s where id=%s, ;',
  (pic1, id))

  it tells me I have an error in my MySQL syntax. What is the error?
  TIA,
  Victor




-- 
high performance mysql consulting.
http://provenscaling.com




Re: problems w/ Replication over the Internet

2008-04-20 Thread Eric Bergen
Hi Jan,

You have two separate issues here. First the issue with the link
between the external slave and the master. Running mysql through
something like stunnel may help with the connection and data loss
issues.

The second problem is that your slave is corrupt. Duplicate key errors
are sometimes caused by a corrupt table but more often by restarting
replication from an incorrect binlog location. Try recloning the slave
and starting replication again through stunnel.

-Eric

On Tue, Apr 15, 2008 at 1:11 AM, Jan Kirchhoff [EMAIL PROTECTED] wrote:
 I have a setup with a master and a bunch of slaves in my LAN as well as
  one external slave that is running on a Xen-Server on the internet.
  All servers run Debian Linux and its mysql version 5.0.32
  Binlogs are around 2 GB per day. I have no trouble at all with my local
  slaves, but the external one hangs once every two days.
  As this server has no other problems like crashing programs, kernel
  panics, or corrupted files, I am pretty sure that the hardware is OK.

  the slave's log:

  Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [ERROR] Error
  reading packet from server: Lost connection to MySQL server during query
  ( server_errno=2013)
  Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [Note] Slave
  I/O thread: Failed reading log event, reconnecting to retry, log
  'mysql-bin.045709' position 7334981
  Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [Note] Slave:
  connected to master '[EMAIL PROTECTED]:1234',replication resumed in log
  'mysql-bin.045709' at position 7334981
  Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
  in Log_event::read_log_event(): 'Event too big', data_len: 503316507,
  event_type: 16
  Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
  reading relay log event: slave SQL thread aborted because of I/O error
  Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Slave:
  Could not parse relay log event entry. The possible reasons are: the
  master's binary log is corrupted (you can check this by running
  'mysqlbinlog' on the binary log), the slave's relay log is corrupted
  (you can check this by running 'mysq
  lbinlog' on the relay log), a network problem, or a bug in the master's
  or slave's MySQL code. If you want to check the master's binary log or
  slave's relay log, you will be able to know their names by issuing 'SHOW
  SLAVE STATUS' on this slave. Error_code: 0
  Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
  running query, slave SQL thread aborted. Fix the problem, and restart
  the slave SQL thread with SLAVE START. We stopped at log
  'mysql-bin.045709' position 172
  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
  I/O thread killed while reading event
  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
  I/O thread exiting, read up to log 'mysql-bin.045709', position 23801854
  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
  SQL thread initialized, starting replication in log 'mysql-bin.045709'
  at position 172, relay log './db-extern-relay-bin.01' position: 4
  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
  I/O thread: connected to master '[EMAIL PROTECTED]:1234',  replication
  started in log 'mysql-bin.045709' at position 172
  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [ERROR] Error
  reading packet from server: error reading log entry ( server_errno=1236)
  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [ERROR] Got
  fatal error 1236: 'error reading log entry' from master when reading
  data from binary log
  Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
  I/O thread exiting, read up to log 'mysql-bin.045709', position 172

  slave start;
  doesn't help.

  slave stop, reset slave; change master to
  master_log_file=mysql-bin.045709, master_log_pos=172;slave start
  does not help as well

  the only way to get this up and running again is to do a change master
  to master_log_file=mysql-bin.045709, master_log_pos=0 and use
  sql_slave_skip_counter when I get duplicate key errors. this sucks.
  When this problem occurs, the log positions are always small number, I
  would say less than 500.

  I also get connection errors in the log from time to time, but it
  recovers itself:
  Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [ERROR] Error
  reading packet from server: Lost connection to MySQL server during query
  ( server_errno=2013)
  Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [Note] Slave
  I/O thread: Failed reading log event, reconnecting to retry, log
  'mysql-bin.045705' position 34671615
  Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [Note] Slave:
  connected to master '[EMAIL PROTECTED]:1234',replication resumed in log
  'mysql-bin.045705' at position 34671615

  Sometimes I have
  Apr 13 23:22:04 

Re: Does version 4 mysqlcheck close MyISAM tables left open?

2008-04-20 Thread Eric Bergen
Nicole,

The "tables left open" warning comes from tables that were still open
when the server was rebooted. Internally mysql keeps a per-table
counter of the number of clients that have the table open. When a
table is closed this counter is decremented. If mysql is improperly
shut down the counter doesn't get decremented. A mysqlcheck (repair
table query) will reset the counter.

-Eric
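A minimal sequence matching the behavior Eric describes (table name taken from Nicole's warning message below):

```sql
-- After an unclean shutdown the per-table open counter is stale:
CHECK TABLE exp_stats;    -- warns: "clients are using or haven't
                          -- closed the table properly"
REPAIR TABLE exp_stats;   -- rebuilds the MyISAM table and resets
                          -- the open counter
```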

On Tue, Apr 15, 2008 at 1:12 PM, Garris, Nicole
[EMAIL PROTECTED] wrote:
 Unable to find this in the manual ...

  Yesterday morning we rebooted the server by accident, which crashed and
  restarted MySQL 4.1. Late last night a scheduled job ran mysqlcheck and
  found 4 open tables. When I next ran mysqlcheck it found nothing wrong.

  mysqlcheck command:
  CHECK TABLE $DBTABLES $TYPE1 $TYPE2 | mysql --host=$DBHOST -t -u$USER
  -p$PASSWORD $i
  where $TYPE1 and $TYPE2 are empty.

  warning message from the mysqlcheck command:
  expressionengine_dof_public.exp_stats   check   warning 2 clients are
  using or haven't closed the table properly





-- 
high performance mysql consulting.
http://provenscaling.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Innodb vs myisam

2008-04-06 Thread Eric Bergen
I don't see what the issue is. As Jay said, the row counts in explain
output are estimates. When running an explain query MySQL asks the
storage engine how many rows it thinks are between a set of values for
an index. Different storage engines use different methods to calculate
the row count. Both innodb and myisam estimate the row count based on
statistics they keep on the distribution of keys in an index. MyISAM
is more accurate than innodb with its row count because of how it
keeps statistics. Analyze table on a myisam table will count the
number of unique values in an index
(myisam/mi_check:update_key_parts). Innodb samples the key
distribution in 8 different pages per index and does some calculations
based on the tree structure of those pages (details in
innobase/btr/btr0cur.c:btr_estimate_number_of_different_key_vals).
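A short illustration using the table from the thread below (because InnoDB re-samples pages, the estimate can move slightly between runs):

```sql
-- Refresh index statistics; for InnoDB this re-runs the page sampling
-- described above, so EXPLAIN's rows estimate can change afterward.
ANALYZE TABLE user_course_payment;
SHOW INDEX FROM user_course_payment;  -- Cardinality column shows the
                                      -- (estimated) distinct key count
```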


On Sun, Apr 6, 2008 at 8:49 PM, Moon's Father [EMAIL PROTECTED] wrote:
 Just waiting for any reply .



  On Thu, Apr 3, 2008 at 11:01 PM, Jay Pipes [EMAIL PROTECTED] wrote:

   Please actually read my reply before asking the same question.  As I
   stated, InnoDB outputs *estimated* row counts in EXPLAIN, whereas MyISAM
   outputs *accurate* row counts.
  
   -jay
  
   Krishna Chandra Prajapati wrote:
  
    Hi,

    On myisam storage system

    mysql> explain select ui.user_id, ucp.user_id, ucp.payment_date from
    user_info ui, user_course_payment ucp where ui.user_id=ucp.user_id;

    | id | select_type | table | type   | possible_keys                 | key                   | key_len | ref             | rows   | Extra       |
    |  1 | SIMPLE      | ucp   | index  | PRIMARY,user_course_pay_comp1 | user_course_pay_comp1 | 30      | NULL            | 256721 | Using index |
    |  1 | SIMPLE      | ui    | eq_ref | PRIMARY                       | PRIMARY               | 10      | dip.ucp.user_id |      1 | Using index |
    2 rows in set (0.00 sec)

    On innodb storage system

    mysql> explain select ui.user_id, ucp.user_id, ucp.payment_date from
    user_info ui, user_course_payment ucp where ui.user_id=ucp.user_id;

    | id | select_type | table | type   | possible_keys                 | key                     | key_len | ref             | rows   | Extra       |
    |  1 | SIMPLE      | ucp   | index  | PRIMARY,user_course_pay_comp1 | idx_user_course_payment | 9       | NULL            | 256519 | Using index |
    |  1 | SIMPLE      | ui    | eq_ref | PRIMARY                       | PRIMARY                 | 10      | dip.ucp.user_id |      1 | Using index |
    2 rows in set (0.00 sec)

    I have executed ANALYZE TABLE on both systems (innodb and myisam),
    yet there is a small difference in the estimated row counts
    (256721 vs. 256519, highlighted above).

    Is it the behavior of myisam or innodb, or the internal working of
    the storage engines?

    Thanks,
    Krishna
   
   
   
   
On Wed, Apr 2, 2008 at 9:06 PM, Rob Wultsch [EMAIL PROTECTED] wrote:
   
 On Wed, Apr 2, 2008 at 5:06 AM, Krishna Chandra Prajapati 
 [EMAIL PROTECTED] wrote:

  Horribly ugly stuff
 
  I know I sure as heck am not going to spend half an hour turning those
  queries into something understandable, and I expect no one else will
  either.  If you want help, please remove all extraneous details (turn
  table and column names into t1, t2, col1, col2, etc., or descriptive
  names like parent, child, datetime_end) and send out something that is
  easy to reproduce. You get a cupcake if you include ddl that populates
  itself with random data.

  Also, using \G instead of a semicolon will make database output a
  heck of a lot easier to read in email form.

 --
 Rob Wultsch
 [EMAIL PROTECTED]
 wultsch (aim)


   
   
   
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
   http://lists.mysql.com/[EMAIL PROTECTED]
  
  


  --
  I'm a mysql DBA in china.
  More about me 

Re: only myisam storage engine

2008-04-01 Thread Eric Bergen
You can set the skip-innodb option in my.cnf

-Eric
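For reference, the my.cnf fragment:

```ini
[mysqld]
skip-innodb
```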

On Tue, Apr 1, 2008 at 9:28 PM, Ananda Kumar [EMAIL PROTECTED] wrote:
 Hi All,
  We have a linux box running debian, with 8 cpus and 8 GB RAM, and we want
  only the myisam engine to be running on it.

  So, should I not set up any innodb parameters, or is there another way to
  have only the myisam engine running on this machine? Please let me know.

  regards
  anandkl




-- 
high performance mysql consulting.
http://provenscaling.com




Re: Connections on Database

2008-03-31 Thread Eric Bergen
You probably only want max_connections and not max_user_connections.
max_user_connections is the number of times a user can connect per
hour.

-Eric

2008/3/31 Vidal Garza [EMAIL PROTECTED]:
 Velen wrote:


  Hi,
  
   How can i limit connections to my database?
  
   Regards,
  
   Velen
  
  max_user_connections = 50
  max_connections = 50


  --
  
  Ing. Vidal Garza Tirado
  Depto. Sistemas
  Aduanet S.A. de C.V.
  Tel. (867)711-5850 ext. 4346, Fax (867)711-5855.
  Ave. César López de Lara No. 3603 Int. B Col Jardín.
  Nuevo Laredo, Tamaulipas, México.



  --
  This message has been scanned by MailScanner
  for viruses and other dangerous content,
  and is believed to be clean.
  For all your IT requirements visit: http://www.aduanet.net




  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





-- 
high performance mysql consulting.
http://provenscaling.com




Re: Connections on Database

2008-03-31 Thread Eric Bergen
Oops. I was wrong on that one. max_user_connections is different from
global max_connections and max_connections per user.
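To make the distinction concrete, a hedged sketch (the account name is illustrative; the per-account GRANT resource options require MySQL 5.0.3 or later):

```sql
-- Global ceiling on simultaneous connections per account (0 = no limit):
SET GLOBAL max_user_connections = 50;
-- Per-account ceiling, set via the grant system:
GRANT USAGE ON *.* TO 'app'@'localhost' WITH MAX_USER_CONNECTIONS 10;
-- The hourly limit is a separate resource option:
-- GRANT USAGE ON *.* TO 'app'@'localhost' WITH MAX_CONNECTIONS_PER_HOUR 100;
```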

2008/3/31 Eric Bergen [EMAIL PROTECTED]:
 You probably only want max_connections and not max_user_connections.
  Max_user_connections is the number of times a user can connect per
  hour..

  -Eric

  2008/3/31 Vidal Garza [EMAIL PROTECTED]:


  Velen wrote:
  
  
Hi,

 How can i limit connections to my database?

 Regards,

 Velen

max_user_connections = 50
max_connections = 50
  
  
--

Ing. Vidal Garza Tirado
Depto. Sistemas
Aduanet S.A. de C.V.
Tel. (867)711-5850 ext. 4346, Fax (867)711-5855.
Ave. César López de Lara No. 3603 Int. B Col Jardín.
Nuevo Laredo, Tamaulipas, México.
  
  
  
--
This message has been scanned by MailScanner
for viruses and other dangerous content,
and is believed to be clean.
For all your IT requirements visit: http://www.aduanet.net
  
  
  
  
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
  
  



  --
  high performance mysql consulting.
  http://provenscaling.com




-- 
high performance mysql consulting.
http://provenscaling.com




Re: The Use database command is too slow

2008-02-02 Thread Eric Bergen
I think a better question is why do you have one database per user?

-Eric

On 2/2/08, imad [EMAIL PROTECTED] wrote:
 I am not connecting through MySQL. I am connecting through PHP. How
 can I speed it up?




 On Feb 3, 2008 12:20 AM, Saravanan [EMAIL PROTECTED] wrote:
  Hi,
 
  when you connect using the mysql client, auto-rehash is enabled by default. 
  It will read and open the tables in the database when you try to use the 
  database. This may take time for a database with many tables. Use mysql 
  with -A or --no-auto-rehash.
 
  shell> mysql -uroot -A
  shell> mysql -uroot --skip-auto-rehash
 
  Saravanan
  MySQL DBA
 
 
  --- On Sat, 2/2/08, imad [EMAIL PROTECTED] wrote:
 
   From: imad [EMAIL PROTECTED]
   Subject: The Use database command is too slow
   To: mysql@lists.mysql.com
   Date: Saturday, February 2, 2008, 6:14 AM
 
   Hello,
  
   I have like 50K databases on my server, one for every user.
   I am experience delay in the command 'use database'
   which is issued by
   every user when he connects to the database.
  
   Can anyone here tell me what is my problem and how can I
   speed it up.
  
   thanks
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
   http://lists.mysql.com/[EMAIL PROTECTED]
 
 

  
  Be a better friend, newshound, and
  know-it-all with Yahoo! Mobile.  Try it now.  
  http://mobile.yahoo.com/;_ylt=Ahu06i62sR8HDtDypao8Wcj9tAcJ
 
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
high performance mysql consulting.
http://provenscaling.com




Re: MySQL Ends Enterprise Server Source Tarballs

2007-08-13 Thread Eric Bergen
It's nothing to be concerned about because the source tar balls and
binaries are being mirrored at http://mirror.provenscaling.com/mysql/

-Eric

On 8/10/07, Daevid Vincent [EMAIL PROTECTED] wrote:
 Is this anything to be concerned about?
 We are Enterprise customers. We distribute mySQL on our appliance that
 we sell.

 It doesn't seem like we should worry, now. But I'm a little nervous
 about the future?

 http://linux.slashdot.org/article.pl?sid=07/08/09/2047231

 http://www.linux.com/feature/118489

 ÐÆ5ÏÐ





-- 
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com




Re: What's up with 5.1x beta

2007-08-13 Thread Eric Bergen
It could have to do with 5.1 not being released as alpha until 5.0 was
gamma. Here is a breakdown of the 4.0-5.1 releases and how 5.1 was
different from the previous versions.

-Eric

On 8/13/07, Ed Reed [EMAIL PROTECTED] wrote:
 Does anyone have an explanation as to why it's taking so long for an official 
 release of version 5.1? Is it me, or does it seem like 5.1 is taking a lot 
 longer to be released? I don't remember previous versions having 20 beta revs 
 before an official release. Is there a real date available for an official 
 release?





-- 
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com




Re: slave backups master data

2007-07-16 Thread Eric Bergen

Hi Ofer,

I prefer to use mylvmbackup and lvm to back up mysql. Its snapshot
includes a copy of the relay-log.info file, which has the
exec_master_log_pos and relay_master_log_file. These allow you to use
the snapshot to restore another slave. It's also much faster to take a
snapshot of the file system than it is to run mysqldump, especially on
large datasets.

-Eric

On 7/16/07, Ofer Inbar [EMAIL PROTECTED] wrote:

We've got a couple of production databases using mostly MyISAM tables,
that can't be taken offline without bringing down our application.  To
reduce downtime, we run a full mysqldump once a week and back up the
binary logs every day, so we can always use them to catch up from
the most recent full dump.  Because we're mostly using MyISAM we can't
make the dump a transaction, so the database is very slow for about 10
minutes while we run the dump.  This will get longer and longer as our
database grows.

We could eliminate the slowness entirely if we ran full dumps off a
replication slave.  We could do it on a slave not in use by the
application at that time.  However, the master data in that dump
file would refer to the slave's binary logs, not the real master.
That means we couldn't use that dump to start new replication slaves,
nor to restore a master and catch up (though the latter is less
important since we could could run these dumps more often).

One tactic that seems to work is to stop replication on the slave,
note the master data in show slave status, run a dump, and keep that
master data alongside that dump.  This is clunky for several reasons;
it's harder to automate the backup, harder to automate the restore,
and error-prone.

Is there any way to do a mysqldump of a replication slave and have
that dumpfile contain the master data for the master server?
  -- Cos

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: hang up mysql shell

2007-07-16 Thread Eric Bergen

Jeff,

CTRL-Z in Linux doesn't hang up mysql; it suspends it. mysqld releases
the locks when you issue 'exit' because the connection is closed.
With Ctrl-Z the client process still exists and its connections
still exist, so the locks won't be released. To bring a process out of
suspension, type 'fg' and it will move back to the foreground. Ctrl-D is
typically the exit shortcut.
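
The difference between suspending and exiting can be seen outside mysql
with plain signals. In this sketch, `sleep` stands in for the suspended
mysql client; SIGSTOP is what the terminal sends on Ctrl-Z, and SIGCONT is
what `fg` effectively delivers:

```shell
# 'sleep' stands in for the suspended mysql client process.
sleep 30 &
pid=$!
kill -STOP "$pid"                    # what the terminal sends on Ctrl-Z
sleep 0.1                            # give the state change time to land
state=$(ps -o stat= -p "$pid" | tr -d ' +')
echo "state after STOP: $state"      # 'T' = stopped: the process (and any
                                     # connections/locks it holds) still exists
kill -CONT "$pid"                    # what 'fg' effectively delivers
kill "$pid" 2>/dev/null
```

The process is frozen but alive the whole time, which is exactly why
mysqld keeps its locks.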

On 7/16/07, Jeff Pang [EMAIL PROTECTED] wrote:

hello,

When saying 'exit'  in mysql shell,mysqld would release the lock which was made 
before.
But if I do 'ctrl+z' under linux to hang up mysql shell,would mysqld also 
release the lock or not?
Thank you.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Limitation of DRBD For MySQL

2007-06-18 Thread Eric Bergen

Hi,

You have hit most of the reasons. One other important one is that if a
table or filesystem is corrupted, that corruption is propagated to
the DRBD slave. My opinion and several other reasons can be found
here:
http://ebergen.net/wordpress/2007/04/02/drbd-in-the-real-world/


-Eric

On 6/17/07, Mohd Irwan Jamaluddin [EMAIL PROTECTED] wrote:

Good day,

Someone gives me several limitations of DRBD For MySQL? Here is the list,

1. Idle resource – secondary host sits idle, wasted investment
2. Failover is not instant, nor transparent
-Cold standby failover
3. Recovery requires time to start / recover database
4. Recovery process can fail – requires reload
5. Requires database journal capability
-MySQL MyISAM does not work
6. Operation not continuous: planned downtime required
-Active-passive does not cover DB maintenance
-Anything that requires mounted disk
7. Does not address scaling or performance
8. OS Limitations – Some only run on Linux

May someone elaborate or disprove those points?
Thanks.

--
Regards,
Mohd Irwan Jamaluddin
Web: http://www.irwan.name/
Blog: http://blog.irwan.name/




--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: speedup mysql restore

2007-05-21 Thread Eric Bergen

The best way to speed up restores is to not use mysqldump at all.
Instead take a snapshot of the filesystem with something like
mylvmbackup. Then restore is as fast as copying the files over.


Also, mysqldump adds DISABLE KEYS and disabled unique checks to the
output by default when using the --opt flag.
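
For reference, the statements --opt wraps around each table's rows look
roughly like this (shape of the excerpt only; the `/*!40000 ... */`
version-comment syntax makes older servers ignore them):

```sql
-- Illustrative excerpt of a mysqldump --opt table section:
/*!40000 ALTER TABLE `t` DISABLE KEYS */;
INSERT INTO `t` VALUES (...),(...);
/*!40000 ALTER TABLE `t` ENABLE KEYS */;
-- Non-unique MyISAM indexes are rebuilt in one pass at ENABLE KEYS,
-- which is much faster than updating them row by row during the load.
```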

-Eric

On 5/20/07, Ray Cauchi [EMAIL PROTECTED] wrote:

Have you tried using the

  use db
  source /path/to/dumpfile.sql

command via the mysql command line client?

At 08:28 AM 5/20/2007, Vitaliy Okulov wrote:
Hello, mysql.

How i can speedup restore of mysql DB from file created by mysqldump?

MySQL is quite fast at reloading data, but if
you don't have enough memory or if
key_buffer_size is not set high enough, then it
can take days to re-index the data. You need to
stuff as much RAM as possible in your machine
and set key_buffer_size to at least 30% of your
machine RAM. I've had indexes take 2+ days to
rebuild and adding more RAM and tweaking
key_buffer_size allowed the same keys to be rebuilt in under an hour.

Mike



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com


Re: max mysql under windows

2007-02-08 Thread Eric Bergen

Hi Charles,

The manual has a few options for running multiple mysql instances on
windows. See:
http://dev.mysql.com/doc/refman/5.0/en/multiple-windows-servers.html


On 2/5/07, Brown, Charles [EMAIL PROTECTED] wrote:

Hello All.



Is it possible to setup two instances of MySQL under windows?  I am
wondering because I would like to setup and test replication



Thanks

Charles









This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: INSERT IGNORE BUG?

2007-02-08 Thread Eric Bergen

Hi Ed,

Can you please reply with a repeatable test case?

On 2/1/07, Ed Pauley II [EMAIL PROTECTED] wrote:

I am importing a file into a table in which I want it to ignore
duplicate rows. When I specify --ignore (this also happens if I do a
SELECT IGNORE INTO from the client also) I get a duplicate key error. If
I run the command again it skips the first such instance of a duplicate
key and gives me an error (and exits) for the next instance of a
duplicate key. If I run the command over and over it finally goes
through the whole file. Then the fun starts over. The primary key is on
9 columns but the index shown in the error only has 6 of the columns
listed. Is this a bug?

I am running ver. 5.0.27 on Linux.

--
Ed Pauley II
[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: bug

2006-12-18 Thread Eric Bergen

Ajay,

To look up the string for an error code, use the perror utility:
$ perror 127
MySQL error code 127: Record-file is crashed

Try running repair table.

See http://dev.mysql.com/doc/refman/5.0/en/repair.html for more details.
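
A hedged recovery sequence might look like this (the table name is a
placeholder; check first so you know what you're repairing):

```sql
-- 'broken_table' is a hypothetical name.
CHECK TABLE broken_table;          -- confirm the corruption
REPAIR TABLE broken_table;         -- standard MyISAM repair
-- If the index file is too damaged for a normal repair,
-- rebuild the indexes from the data file:
REPAIR TABLE broken_table USE_FRM;
```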

-Eric

On 12/17/06, ajay roy [EMAIL PROTECTED] wrote:

i am getting the error somthing like that

GET ERROR(127) TABLE HANDLER PROBLEM

i am sorry that i coulde not take mysqlbug script

thanks


ajay,hyderabad,india

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around
http://mail.yahoo.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Sequence ID generation transaction-safe?

2006-12-16 Thread Eric Bergen

Hi Frederic,

Update then select on a single-row table is transaction safe. If two
users start a transaction and issue update queries, the first query to
execute will set a lock on that row. The second query will then block
on the update, waiting to obtain the same lock. In InnoDB, row locks are
not released until a transaction commits. This means the transaction
with the successful update can issue another select to fetch the new
row id while the second transaction is still waiting on the row lock.
When the first transaction commits, the row lock will be freed, allowing
the second transaction to obtain the lock, update, and select the next
number in the sequence.
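
Sketched as two concurrent sessions against the sequence table from the
original question (the interleaving comments are illustrative):

```sql
-- Session A:
BEGIN;
UPDATE translations_seq SET id = LAST_INSERT_ID(id + 1);  -- takes the row lock
SELECT LAST_INSERT_ID();  -- A's unique id; LAST_INSERT_ID() is per-connection

-- Session B (concurrently):
BEGIN;
UPDATE translations_seq SET id = LAST_INSERT_ID(id + 1);  -- blocks on A's lock

-- Session A:
COMMIT;  -- releases the lock; B's UPDATE now completes with the next id
```

Because LAST_INSERT_ID() is connection-local, each session sees only the
id its own UPDATE generated, even under concurrency.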

I'm curious why you are using a sequence table to generate unique ids
for another table. Why not just change the other table to have an
auto_increment primary key and a secondary unique key to replace the
current primary key?

Innodb uses a special mechanism to allocate auto_increment ids that is
much faster than a sequence table...


-Eric


On 12/16/06, Frederic Wenzel [EMAIL PROTECTED] wrote:

Hi,

I am using a sequence table as suggested in
http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#id2887015

in order to generate new, unique IDs for a table where I have a
primary key spanning multiple columns and still need a unique numeric
part without being able to use an auto-increment table.

I am using UPDATE translations_seq SET id=LAST_INSERT_ID(id+1) and
then I fetch my newest ID with select id from translations_seq.


While this method is described in the manual as multi-user safe I
was wondering if this was also transaction safe? When two users start
a transaction at the same time (and don't commit it yet) will they get
different IDs? Because the UPDATE statement will not take place yet
until COMMITing it, I am unsure if it will actually hand out different
IDs for both of the transactions.

Can anyone enlighten me?

Thanks
Fred

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why innodb can give the same X gap lock to two transactions?

2006-12-16 Thread Eric Bergen

Which version of MySQL is this?

In 5.1.12, when I run your test the second transaction blocks waiting
for the lock (as it should). My SHOW INNODB STATUS output is:


TRANSACTIONS

Trx id counter 0 1300
Purge done for trx's n:o  0 1288 undo n:o  0 0
History list length 1
Total number of lock structs in row lock hash table 2
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 1284, not started, process no 23890, OS thread id 1116363696
MySQL thread id 2, query id 25 localhost root
---TRANSACTION 0 1299, ACTIVE 21 sec, process no 23890, OS thread id
1116765104 starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 3201 row lock(s)
MySQL thread id 5, query id 58 localhost root statistics
select * from test where id=6 for update
Trx has approximately 1 row locks
--- TRX HAS BEEN WAITING 21 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 52 n bits 80 index `PRIMARY` of table
`test`.`test` trx id 0 1299 lock_mode X locks rec but not gap waiting
Record lock, heap no 7 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
0: len 4; hex 8006; asc ;; 1: len 6; hex 0510; asc
  ;; 2: len 7; hex 80002d0110; asc -  ;; 3: len 20; hex
6875616e676a7920202020202020202020202020; asc huangjy ;;




On 12/11/06, leo huang [EMAIL PROTECTED] wrote:

Hi, all,

We have an innodb table named test. It has some rows as follow:
mysql show create table test;
+---+-+
| Table | Create Table

|
+---+-+
| test  | CREATE TABLE `test` (
  `id` int(11) NOT NULL default '0',
  `name` char(20) default NULL,
  PRIMARY KEY  (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+---+-+
1 row in set (1.75 sec)

mysql select * from test;
++-+
| id | name|
++-+
|  1 | huangjy |
|  2 | huangjy |
|  3 | huangjy |
|  4 | huangjy |
|  5 | huangjy |
|  7 | huangjy |
|  8 | huangjy |
|  9 | huangjy |
++-+
8 rows in set (1.98 sec)

When I start two transactions as follow:

Transaction 1:
mysql begin;
Query OK, 0 rows affected (2.51 sec)

mysql select * from test where id=6 for update;
Empty set (2.17 sec)

Transaction 2:
mysql begin;
Query OK, 0 rows affected (1.56 sec)

mysql select * from test where id=6 for update;
Empty set (2.27 sec)

Now, I use show engine innodb status to see the innodb lock status.
The output as follow:


TRANSACTIONS

Trx id counter 0 5168907
Purge done for trx's n:o  0 5168898 undo n:o  0 0
History list length 2
Total number of lock structs in row lock hash table 2
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 30668, OS thread id 2484620208
MySQL thread id 2, query id 46 localhost root
show engine innodb status
---TRANSACTION 0 5168906, ACTIVE 83 sec, process no 30668, OS thread
id 2484820912
2 lock struct(s), heap size 320
MySQL thread id 1, query id 45 localhost root
TABLE LOCK table `test/test` trx id 0 5168906 lock mode IX
RECORD LOCKS space id 0 page no 54 n bits 80 index `PRIMARY` of table
`test/test` trx id 0 5168906 lock_mode X locks gap before rec
Record lock, heap no 8 PHYSICAL RECORD: n_fields 4; 1-byte offs TRUE;
info bits 0
 0: len 4; hex 8007; asc ;; 1: len 6; hex 004eb50c; asc
N  ;; 2: len 7; hex 008013285c; asc  (\;; 3: len 20; hex
6875616e676a7920202020202020202020202020; asc huangjy ;;

---TRANSACTION 0 5168905, ACTIVE 120 sec, process no 30668, OS thread
id 2484419504
2 lock struct(s), heap size 320
MySQL thread id 3, query id 43 localhost root
TABLE LOCK table `test/test` trx id 0 5168905 lock mode IX
RECORD LOCKS space id 0 page no 54 n bits 80 index `PRIMARY` of table
`test/test` trx id 0 5168905 lock_mode X locks gap before rec
Record lock, heap no 8 PHYSICAL RECORD: n_fields 4; 1-byte offs TRUE;
info bits 0
 0: len 4; hex 8007; asc ;; 1: len 6; hex 004eb50c; asc
N  ;; 2: len 7; hex 008013285c; asc  (\;; 3: len 20; hex
6875616e676a7920202020202020202020202020; asc huangjy
 ...

As you can see, TRANSACTION 0 5168906 and TRANSACTION 0 5168905 both
get the X gap locks on the same record. The MySQL Manual said that X
lock is an exclusive lock. Why two transactions can get the same X
lock?

Any comment will be welcomed?

Best regards,
Leo Huang

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

Re: Help understanding the whole password issue

2006-11-28 Thread Eric Bergen

Mathieu,

The old_passwords option only changes how mysqld generates new
password hashes. If old_passwords=0, when you create a new user mysql
will generate a long hash that will only accept clients using the newer
protocol (and client lib). When old_passwords=1, mysql will generate
the older, shorter hashes that will accept clients using the old
protocol (and client lib).

It's possible to have some clients use the old protocol and some use
the new protocol by changing their passwords accordingly. This is what
the OLD_PASSWORD function is for in both mysql and phpmyadmin.
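
For example, to mix the two schemes per account (user names here are
hypothetical; with old_passwords=0, PASSWORD() produces the long hash):

```sql
-- Long 4.1-style hash: only clients speaking the new auth protocol connect.
SET PASSWORD FOR 'newapp'@'%' = PASSWORD('secret');
-- Short pre-4.1 hash: old client libraries can still authenticate.
SET PASSWORD FOR 'oldapp'@'%' = OLD_PASSWORD('secret');
```

You can tell which scheme an account uses from mysql.user: new-style
hashes are much longer and start with `*`.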

-Eric

On 11/28/06, Mathieu Bruneau [EMAIL PROTECTED] wrote:

I may not be totally right but :

1) Well it all depends of which client library they are using if they
are using the old library yes

2) Well if that client use the old libraby yes ... (btw you can check
the password field in the mysql.user table to view the difference =
they have a different format, newer one are much longer and start with
an * so you know which user uses the new and the old scheme)

3) No '%' means everything except localhost... That's a special case in
mysql :)

Regards,

--
Mathieu Bruneau
aka ROunofF

===
GPG keys available @ http://rounoff.darktech.org


Joey a écrit :
 I need some clarification on the old password issue as I am having problems
 logging in.


 Here is what I am running into now.

 Original Server config:
 my.cnf is NOT using the old_passwords=1 under MySQL-server-4.1.14-0
 I am able to login via phpmyadmin without problem, remotely with mysqladmin
 and of course applications.
 perl-DBD-MySQL-2.9003-5
 MySQL-devel-4.1.14-0
 MySQL-server-4.1.14-0
 MySQL-client-4.1.14-0
 php-mysql-4.3.11-2.8.4.legacy
 mod_auth_mysql-20030510-5
 MySQL-shared-compat-4.1.13-0
 MySQL-embedded-4.1.14-0
 MySQL-bench-4.1.14-0



 New Server config:
 tried old_passwords=1 both on  off under mysql-4.1.20-1.RHEL4.1
 libdbi-dbd-mysql-0.6.5-10.RHEL4.1
 mysql-4.1.20-1.RHEL4.1
 mod_auth_mysql-2.6.1-2.2
 php-mysql-4.3.9-3.22
 mysql-bench-4.1.20-1.RHEL4.1
 mysql-server-4.1.20-1.RHEL4.1
 mysqlclient10-3.23.58-4.RHEL4.1
 perl-DBD-MySQL-2.9004-3.1
 MySQL-python-1.0.0-1.RHEL4.1
 mysql-devel-4.1.20-1.RHEL4.1

 New server tried running with both the old_passwords=1 and not really
 getting consistant results.
 yes I restart mysqld after changes etc.  Part of the problem may also be
 related to the host field in which some times I use the % so that I can
 connect via the gui tools.

 So are my questions:

 1. if I want to use the older apps, and phpmyadmin etc do I need to run
 old_passwords=1?

 2. If I start mysqld with the old_passwords=1 set, when I use phpmyadmin to
 edit someone's password do I need to use the password drop down or the
 old_password drop down?

 3. For remote access if I setup % in the host as I have done in the past
 that should allow me to login both remotely  to localhost right?
 ( again this works perfectly on my older box)


 Hope this makes sense.

 Thanks,

 Joey





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB Crash RECOVERY HELP (Urgent)

2006-09-21 Thread Eric Bergen

The error message says to go to
http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html to learn
how to set the different recovery options for innodb.

On 9/21/06, Sayed Hadi Rastgou Haghi [EMAIL PROTECTED] wrote:

Dear all,
our DB server crashed and when I try to start Mysql
/etc/init.d/mysql/start
I get these lins in my error log

060921 13:00:14  mysqld started
060921 13:00:14  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
060921 13:00:14  InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 3 3546172175.
InnoDB: Error: tried to read 65536 bytes at offset 0 2173440.
InnoDB: Was only able to read 54784.
InnoDB: Fatal error: cannot read from file. OS error number 17.
060921 13:01:24InnoDB: Assertion failure in thread 3086943936 in file
os0file.c line 2107
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
InnoDB: about forcing recovery.
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose
the problem, but since we have already crashed, something is definitely
wrong
and this may fail.

key_buffer_size=0
read_buffer_size=536866816
max_used_connections=0
max_connections=550
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections =
2094947 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=(nil)
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xbffe3d4c, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x814cbfb
0x8d28b8
0x1
0x8355aed
0x835c659
0x835ce73
0x829ba01
0x81d3af3
0x81c5cb2
0x815028a
0x773e33
0x80e0c71
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and
follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
060921 13:01:24  mysqld ended

when I add skip-innodb in my.cnf, it startsup but my innodb tables could not
be accessed.

How can I start MySQL server again?

--
Sincerely,
Hadi Rastgou
A Google Account is the key that unlocks the world of Google.
Get Firefox! http://www.spreadfirefox.com/?q=affiliates&id=0&t=1





--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Who's locking ?

2006-07-29 Thread Eric Bergen

Marco,
  It is also possible to set table locks by hand with a LOCK TABLES
query. Those connections will simply show up as sleeping or doing
something else; there currently isn't a SHOW LOCKS command in MySQL.
Depending on what the process list looks like, tracking it down could
be as simple as looking for the sleeping connection that has been
asleep for about as long as the longest locked select. If it's locking
tables and then doing something else, it may be more difficult.
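
The symptom is easy to reproduce by hand (the table name is hypothetical):

```sql
-- Connection 1: takes an explicit write lock, then goes quiet.
LOCK TABLES listings WRITE;
-- Connection 1 now appears in SHOW PROCESSLIST as 'Sleep',
-- not as a running UPDATE or DELETE.

-- Connection 2: blocks in state 'Locked' until connection 1
-- issues UNLOCK TABLES or disconnects.
SELECT COUNT(*) FROM listings;
```

This is why the process list can fill with locked SELECTs while no
write query is visible anywhere.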


On 7/29/06, Marco Simon [EMAIL PROTECTED] wrote:

Hello list,

I've a question about understanding table-locks of myisam-tables:

From time to time it happens that my proccesslist gets flooded by
Queries, that
are waiting for a locked table. Quickly there are 100-500 waiting
Queries in the
queue - some of them waiting since more than 1000 seconds.

My question is: How can I see who (which query) set the lock, which all
other
queries are waiting for to be released ? I thought there should be at
least one
update Query in the processlist, which set the lock and which is still
working.
But there are only locked queries for the locked table - no updates, no
deletes.

Could somebody try to explain, how I can investigate, who's blocking the
line ?









--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Looking For How test database performans on different hardware

2006-07-29 Thread Eric Bergen

Vahric,
 There are so many variations between different hardware platforms
that can affect performance on your specific application that it's
better to run benchmarks on the actual hardware if you have it. If not
then Peter Zaitsev's performance blog is the next best resource.
http://www.mysqlperformanceblog.com/category/hardware/

On 7/27/06, Vahric MUHTARYAN [EMAIL PROTECTED] wrote:

Hello ,

I want to test Mysql , on Dual Intel Xeon 2MB Cache CPU and Dual AMD Opteron
platforms for looking differents which one is better then other !
I found something but I want to asl to list , is there anybody have an
experiance about this type tests !

Regards

Vahric MUHTARYAN
Sistem Mühendisi/System Engineer
DorukNet
Tel / Phone : +90 212 326 92 00
Fax : +90 212 227 28 11
E-mail : [EMAIL PROTECTED]
http://www.doruk.net.tr
http://www.doruk.net.tr/english.html

Bu e-posta mesajı kişiye özel olup, gizli bilgiler içeriyor olabilir. Eğer
bu e-posta mesajı size yanlışlıkla ulaşmışsa, e-posta mesajını kullanıcıya
hemen geri gönderiniz ve mesaj kutunuzdan siliniz. Bu e-posta mesajı, hiç
bir şekilde, herhangi bir amaç için çoğaltılamaz, yayınlanamaz ve para
karşılığı satılamaz. Yollayıcı, bu e-posta mesajının - virüs koruma
sistemleri ile kontrol ediliyor olsa bile - virüs içermediğini garanti etmez
ve meydana gelebilecek zararlardan doğacak hiçbir sorumluluğu kabul etmez.
The information contained in this message is confidential, intended solely
for the use of the individual or entity to whom it is addressed and may be
protected by professional secrecy. You should not copy, disclose or
distribute this information for any purpose.
If you are not the intended recipient of this message or you receive this
mail in error, you should refrain from making any use of the contents and
from opening any attachment. In that case, please notify the sender
immediately and return the message to the sender, then, delete and destroy
all copies.
This e-mail message has been swept by anti-virus systems for the presence of
computer viruses. In doing so, however, we cannot warrant that virus or
other forms of data corruption may not be present and we do not take any
responsibility in any occurrence.






--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com


Re: udf configuration resources

2006-07-20 Thread Eric Bergen

Your udf executes in a threaded environment, so you have the option
of creating a data structure to store options in that will persist
across udf executions. However, since you will never deallocate this
structure, it's essentially leaked memory.

What are you doing with udfs that you want to store configuration for?
I have a feeling udfs are not the right solution.

On 7/19/06, Yong Lee [EMAIL PROTECTED] wrote:

Hi all,



I hope I have hit the right group for this question.



I would like to create a UDF that is configurable at run time.  Similar to
how mysql can use variables defined in the /etc/my.cnf file, I'd like my UDF
to make use of configuration parameters that can be set at run time.



I'm wondering if this is possible and what strategy to take to implement
this, ie: a mechanism to have mysql or the UDF read something once and then
have the UDF able to refer back to it everytime it is run.



I know I could read a file in the _init function, but this seems very
wasteful to read a file every time the function is being used.



Any thoughts on this would be appreciated.



thanks,

Yong.






--
Eric Bergen
[EMAIL PROTECTED]
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: key_buffer_size vs innodb_buffer_pool_size

2006-01-15 Thread Eric Bergen
The difference in the recommended sizes comes from the different
caching techniques each storage engine uses. The MyISAM key buffer
(key_buffer_size) stores only index blocks, whereas the InnoDB buffer
pool (innodb_buffer_pool_size) caches both indexes and data. MyISAM
relies on the operating system to cache data in RAM, which is why you
don't want to use all available memory for the key buffer.
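
As a rough sketch, a my.cnf for a mixed MyISAM/InnoDB workload might
split memory like this (the host size and the numbers are assumptions
for illustration, not recommendations):

```ini
[mysqld]
# Hypothetical 8 GB host with a mix of MyISAM and InnoDB tables.
key_buffer_size         = 1024M  # MyISAM index blocks only; data blocks
                                 # come from the OS page cache
innodb_buffer_pool_size = 4096M  # caches InnoDB data *and* index pages
# The remainder is left for per-connection buffers and the OS file
# cache that MyISAM data reads depend on.
```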


On 1/14/06, Grant Giddens [EMAIL PROTECTED] wrote:
 Hi,

 After reading through the example my.cnf files (large, huge,  etc), I 
 started to wonder what the difference was between the isam  key_buffer_size 
 and the innodb innodb_buffer_pool_size.

   I realize that they are two different table types, but some of the docs  
 says to set the key_buffer_size to 25%-50% of the overall system  memory.  
 The comments for the innodb_buffer_pool_size say that it  can be set to 
 50%-80% of the overall system memory.

   Maybe I don't understand exactly the difference between the two because  I 
 don't understand why they have different memory recommendations.

   Is there any FAQs on the my.cnf file?  How would you set these two  
 variables if you had an even mix of isam and innodb tables?  Where  can I 
 learn more about tweaking the my.cnf file?  The mysql online  documentation 
 is good, but I need a more basic description of these two  variables and all 
 the other my.cnf settings.

   Thanks,
   Grant



 -
 Yahoo! Photos
  Got holiday prints? See all the ways to get quality prints in your hands 
 ASAP.



--
Eric Bergen
[EMAIL PROTECTED]
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How can you tell if a table is corrupted?

2005-12-11 Thread Eric Bergen
If mysql detects corruption, it will stop further access to the table
until you repair it. It is also possible for corruption to happen that
mysql can't detect. If you suspect corruption, run a repair on the
table.

On 12/11/05, Subscriptions [EMAIL PROTECTED] wrote:
 Hi all,
 Say, is there a way to tell if a table has been corrupted?  We're having some 
 weird things happening and the only thing I can think of is possible 
 corruption of a table, but is there anything you can do to find out?

 Jenifer




--
Eric Bergen
[EMAIL PROTECTED]
http://www.ebergen.net

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: recursive queries

2005-12-09 Thread Eric Bergen
I believe that at some point MySQL will support Oracle-style
CONNECT BY PRIOR syntax, but it's not implemented yet.


Gleb Paharenko wrote:


Hello.

Though I haven't read this article, but its title looks promicing:
 http://dev.mysql.com/tech-resources/articles/hierarchical-data.html



Duncan Miller wrote:
 


I am looking to do a query on a self join table that returns the parent
records.

Obviously there are ways to do this, but just wondered if there are any
functions in MySQL that make this easier, or specific functions
available to stored procedures. I have read the manual and couldn't find
anything, but thought I would ask.

I notice that the new MS SQL Server has support for recursive queries
and oracle also, so I thought maybe...

Thanks



   




 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimize: 14 hours and still running!

2005-12-09 Thread Eric Bergen

Is the box swapping?

Gleb Paharenko wrote:


Hello.

As was mentioned by other members without seeing your configuration
settings it is difficult to say about InnoDB performance. You can
indirectly monitor the OPTIMIZE speed by ROW OPERATIONS section of SHOW
INNODB STATUS. For InnoDB it maps to ALTER TABLE, which rebuilds
the table.




Nathan Gross wrote:
 


On a 1.6ghz, 1gb ram, Linux machine running Mysql 4.1x.
 I have an Innodb table with over 20 million records and index size
about 3.7 gig, data size 2.2gig (yes, many indexes, more space then
the data itself). Last night I tried an Optimize from the Admin gui
console (logged in as root at the host server), and the thing is still
running!
Problem is I need to leave early this evening and have to take some action.

The Linux 'top' utility has it on the top since then at about 11%-18%
cpu Disk activity is continuously heavy.

1. How long should it take?

2. If I hit cancel will it:
a) Roll back what it did, another 14 hours!
b) Just stop as if nothing happened.
c) The table will be partially optimized and will run normally.
d) hang the process and/or machine.

3. Is the data in jeopardy?

Thank you all.
-nat

   




 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: IP Resolution

2005-10-14 Thread Eric Bergen
MySQL 4.1.14 is the current version. You should always upgrade to the
latest release and retest your problem before reporting bugs.


Ben Clewett wrote:


Dear MySQL,

My MySQL 4.1.9 has lost the ability to work out what IP address 
clients are connecting from.  Eg:



$ mysqladmin  processlist
++--+---+--+-+--+---+--+ 

| Id | User | Host  | db   | Command | Time | State | 
Info|
++--+---+--+-+--+---+--+ 


| 5  | test | 0.0.0.0:55049 | test | Sleep   | 10   |   ||
| 6  | root | localhost |  | Query   | 0|   | show 
processlist |
++--+---+--+-+--+---+--+ 




This is annoying as I can't authenticate users based on their IP address.

I suspect this may be a clash between IPv6 and IPv4.  It happened 
after a patch was applied to the AIX operating system and MySQL 
recompiled.


Would any members have experience or ideas of how this problem may be 
resolved?


Regards,

Ben








Re: Drop Table Problem

2005-09-28 Thread Eric Bergen
Try running a flush tables in mysql to make sure that mysqld knows about 
all the tables on disk.
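A minimal sketch of that sequence, using the table name from the question:

```sql
-- Close cached table handlers so mysqld rereads the table
-- definitions from disk before the drop.
FLUSH TABLES;

-- Should now succeed without error 1051 if the on-disk files exist.
DROP TABLE IF EXISTS table1;
```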


xtcsuk wrote:


Running the command:
Drop table if exists table1
complains that table1 does not exist (Error: 1051).
However, if table1 is swapped with another table, irrespective of its
existence, it works just fine.

Running a Show Table command lists table1 as an existing table, and
trying to create table1 complains that the table already exists.

Please advise.

regards

 







Re: Optimizing query WHERE date > 0

2005-09-08 Thread Eric Bergen
When you add that index, do more than 30% of the rows in the table match
DateTimeNext > 1126215680? If so, MySQL will usually prefer a full table
scan over the index.
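One way to check both the selectivity and the optimizer's choice, sketched with the table and index names that appear in the question's EXPLAIN output:

```sql
-- How selective is the predicate? If this count is more than roughly
-- 30% of the table, the optimizer tends to prefer a full scan.
SELECT COUNT(*) FROM Site WHERE DateTimeNext > 1126215680;

-- Compare the plan with the index forced, to see whether it actually
-- helps for this data distribution:
EXPLAIN SELECT id, Name FROM Site FORCE INDEX (DateTimeNext)
WHERE DateTimeNext > 1126215680;
```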


Dan Baker wrote:


I have lots of tables that are similar in nature:

id int(11) PRI NULL auto_increment
Name varchar(30)
DateTimeNext int(11)

The DateTimeNext field represents when this record needs attention.  A 
value of zero indicates it is being ignored.  There are times when *lots* of 
records' DateTimeNext values will be zero.


I want to find all records in the database that need attention today, so a 
typical query looks like:

SELECT id,Name FROM tbl WHERE DateTimeNext > 1126215680

When I EXPLAIN this query, I get the following:
  table type possible_keys key key_len ref rows Extra
  Site, ALL, NULL,NULL, NULL, NULL, 53587,Using where

If I add an index for DateTimeNext, the EXPLAIN shows:
  table type possible_keys key key_len ref rows Extra
  Site,ALL,DateTimeNext,NULL,NULL,NULL,53587,Using where

It appears that the index does NO good in this query.
Is there anything I can do to optimize this query? Alter the table to 
improve the query? Do anything to not scan the entire stinkin' table?


Thank you,
DanB




 







Re: Anyone knows what happens at mysqladmin startup time?

2005-09-08 Thread Eric Bergen

It appears that mysqld is not listening on /tmp/mysql.sock.

Log into mysql and run show variables like 'socket';
This will tell you where mysqld is actually listening for socket 
connections.
Then update your my.cnf files so they all have the same sock file 
location as the

one that mysqld is listening on.
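For example (the socket path shown is only an assumption; use whatever the server actually reports):

```sql
SHOW VARIABLES LIKE 'socket';
-- Suppose this returns /usr/local/mysql/data/mysql.sock. Then every
-- my.cnf section should point at that same file, e.g.:
--   [client]
--   socket = /usr/local/mysql/data/mysql.sock
--   [mysqld]
--   socket = /usr/local/mysql/data/mysql.sock
```

The [client] section covers mysqladmin and the mysql command-line client, so both will then connect to the socket mysqld is actually listening on.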

$B2@E(B wrote:


hi all, here is the suituation
i've installed mysql4.1.14 under the /usr/local directory. the problem
is, the server can work properly, however, mysqladmin doesn't. i
posted it yesterday, and someone suggested me to issue the following
command, however the same error 111 occured.

#mysqladmin -uroot password mypwd
#error: can't connect to local MySQL server through socket
'/tmp/mysql.sock'(111)

but, the server is *really* running. 
#ps -e | grep mysqld shows mysqld_safe and several other mysqld threads.


i'm wondering if mysqladmin requires parsing some .rc files (if any) or
interpreting some environment variables (if any) that have the wrong
settings or values?

any hints, help, or advice would be most appreciated.

 







Re: log event entry exceeded max_allowed_packet

2005-09-08 Thread Eric Bergen
"Packet too big" errors in replication are often a sign of corrupt
binary logs. If it's a slave reading its relay log, flushing the relay
logs will often temporarily solve the problem. The easiest way to flush
them is STOP SLAVE, then CHANGE MASTER TO back to the current file name
and position, then START SLAVE. MySQL will delete all the current relay
logs, open new ones, and redownload any unprocessed events from the
master.
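The repointing sequence might look like this; the file name and position below are illustrative placeholders only, and must be read from the slave's own SHOW SLAVE STATUS output:

```sql
-- Note Relay_Master_Log_File and Exec_Master_Log_Pos first.
SHOW SLAVE STATUS;

STOP SLAVE;

-- Re-point the slave at the position it has already executed; this
-- discards the current relay logs and refetches unprocessed events.
CHANGE MASTER TO
    MASTER_LOG_FILE = 'master-bin.001',   -- illustrative value
    MASTER_LOG_POS  = 4;                  -- illustrative value

START SLAVE;
```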


Gleb Paharenko wrote:


Hello.

In my opinion (it is based on looking through the source code, but I
can be wrong) - max_allowed_packet should be at least as large as the
largest event in the master's binary log. The tool for researching
binary logs is mysqlbinlog utility. See:

 http://dev.mysql.com/doc/mysql/en/mysqlbinlog.html
http://dev.mysql.com/doc/mysql/en/Binary_log.html

Your versions of MySQL are very old; I recommend you upgrade, as lots of
bugs have been fixed since these releases.


Jacob Friis [EMAIL PROTECTED] wrote:
 


Is there any way I can find out how big I should set max_allowed_packet?

I have changed it from the default 1M to 32M, but I still get that error.
Or is it a bug? We use MySQL 4.0.12 on the slave and 4.0.15-max on the master.
How big should I set max_allowed_packet? The servers are dual Pentium
with 2G ram.

Thanks for any help.
Jacob

   




 







Re: Doubt about query optimization

2005-07-27 Thread Eric Bergen
Can you send us the actual show indexes from table and explain output 
that isn't shortend?


Ronan Lucio wrote:


Hello,

I'm using MySQL-4.1.7, and I have a query where, even with a WHERE
clause, a specific table isn't being optimized properly:

I have four tables:

real_state: cod, name, city, ag_cod, sell
agency: cod, name, ag_status
front: cod, rs_cod, ag_cod
photo: cod, rs_cod

These table have the following indexes:

real_state:
   cod (pri)
   city
   ag_cod
agency:
   cod
   name
front:
   cod
   rs_cod
   ag_cod
photo
   cod
   rs_cod

When I EXPLAIN the query:

EXPLAIN
SELECT front.rs_cod, photo.cod, real_state.descr
FROM real_state, agency, front, photo
WHERE real_state.city = 1
AND real_state.sell = 1
AND front.rs_cod = real_state.cod
AND photo.rs_cod = front.rs_cod
AND agency.cod = real_state.ag_cod
AND ag_status = 'A'

It shows me (in a short):

table        key      rows
==========   ======   ====
front        rs_cod   2085
real_state   cod      1
agency       cod      1
photo        rs_cod   1

But the trouble is: If I execute:

SELECT COUNT(*)
FROM real_state, front
WHERE real_state.city = 1
AND real_state.cod = front.rs_cod

Returns: 271

So, why isn't the index front.rs_cod being used?

Any help would be appreciated,

thank you,
Ronan








Re: Insert with prefix

2005-07-27 Thread Eric Bergen

Try this:
concat('UP', lpad(category_id, 6, '0'));
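A sketch of how that expression could be used; the `code` and `name` columns are hypothetical additions for illustration, since AUTO_INCREMENT columns must stay numeric and the prefix has to be applied either on read or in a follow-up statement:

```sql
-- Option 1: store only the numeric id; build the display code on read.
SELECT CONCAT('UP', LPAD(category_id, 6, '0')) AS code
FROM category;

-- Option 2: materialize the code right after the insert, using the
-- id the server just generated for this connection.
INSERT INTO category (name) VALUES ('widgets');
UPDATE category
SET    code = CONCAT('UP', LPAD(category_id, 6, '0'))
WHERE  category_id = LAST_INSERT_ID();
```

LAST_INSERT_ID() is per-connection, so the update targets the row this session just inserted even under concurrent inserts.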

Scott Purcell wrote:


Hello,

I have the following dilemma, that I do not know how to handle.

I have the need for a table where I create an identifier. The identifier consists of a two-character prefix, e.g. 'UP', plus the next AUTO_INCREMENT value.
Here is what I have.


  category_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,

Table should look like this:
UP01
UP02
UP03

So the problem is, how can I grab the auto_increment number, then append the 
prefix while doing an insert?

Can anyone give me some advice on this problem?

Thanks,
Scott

 







Re: Hour counts

2005-07-27 Thread Eric Bergen
This does make his code fall under the limitations of unix timestamps. 
In 30 years or so when we are all retired millionaires ;) some poor 
intern is going to have to figure out why the hour diff calculation is 
failing.



[EMAIL PROTECTED] wrote:


Gyurasits Zoltán [EMAIL PROTECTED] wrote on 07/27/2005 12:57:42 PM:

 


TIMESTAMP() is available as of MySQL 4.1.1.

I can't use this version because replication working :(

I use version 4.0.22

- Original Message - 
From: Martijn Tonies [EMAIL PROTECTED]

To: Gyurasits Zoltán [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Wednesday, July 27, 2005 6:31 PM
Subject: Re: Hour counts


   


Hi,

Check out the  TIMEDIFF  function.

With regards,

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS 
 


SQL
 


Server
Upscene Productions
http://www.upscene.com
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com

I would like to calculate the hour counts from 2 'datetime'.
Example:   2005-07-27 18:00 and 2005-07-27 19:30  = 1,5 hour

I tried this, but it's not good!

R1 : munkaido_end-munkaido_start  /simple substract/
R2 : ROUND(ROUND((end-start)/1)+
(((end-start)/1000-(ROUND((end-start)/1)*10))/6),1)  /good if is 
 

in 
 


one
day/
R3 : ROUND((end-start)/1)-76  /-76 because from 14. to 15. I don't
understand/


start   end  R1R2 R3
07-14 15:00 07-14 17:30 23000   2.5 -74
07-14 23:00 07-15 01:30 783000 78.5 2
07-14 15:00 07-15 02:30 873000 87.5 11
07-14 15:00 07-14 16:00 1   1 -75

Please help me... (does a function exist for this situation?)


Tnx!



 



Then your next option is to convert your datetime values to seconds since 
epoch by using UNIX_TIMESTAMP() (see the same link everyone else has 
already given you for details).


mysqlSELECT (unix_timestamp('2005-07-27 19:30:00') - 
unix_timestamp('2005-07-27 18:00:00'))/3600;

+--+
| (unix_timestamp('2005-07-27 19:30:00') - unix_timestamp('2005-07-27 
18:00:00'))/3600 |

+--+
|1.50 |
+--+
1 row in set (0.00 sec)


Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



 







Re: table export problem

2005-07-20 Thread Eric Bergen

Which version of mysql are you exporting from and inserting into?

[EMAIL PROTECTED] wrote:


Hi all,

I am trying to create a table on the remote server from a table I created on my local server, but it never seems to 
work.


CREATE TABLE `sheet1` (
 `id` int(10) NOT NULL auto_increment,
 `title` varchar(255) NOT NULL default '',
 `fname` varchar(255) NOT NULL default '',
 `sname` varchar(255) default NULL,
 `job_title` varchar(255) default NULL,
 `organisation` varchar(255) default NULL,
 `email` varchar(255) default NULL,
 `street` varchar(255) default NULL,
 `city` varchar(255) default NULL,
 `postcode` varchar(255) default NULL,
 `office_tel` varchar(255) default NULL,
 `mobile` varchar(255) default NULL,
 `fax` varchar(255) default NULL,
 `web` varchar(255) default NULL,
 `add_info` varchar(255) default NULL,
 PRIMARY KEY  (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=303 ;


There seems to be a problem with the last line (this is exported from my 
local server). I am just learning about mySql as I go so have no real clue 
about CHARSET and ENGINE (which I believe may be the problem)


This is the error

1064 - You have an error in your SQL syntax.  Check the manual that 
corresponds to your MySQL server version for the right syntax to use near 
'DEFAULT CHARSET=latin1 AUTO_INCREMENT=303' at line 18


and this is what the manual  says (not very helpful)

a.. Error: 1064 SQLSTATE: 42000 (ER_PARSE_ERROR)

Message: %s near '%s' at line %d


Any help will be appreciated.

R.
 







Re: Mysql crash due to page corruptions

2005-07-13 Thread Eric Bergen

Please upgrade to the newest 4.0 mysql binaries.


Anil wrote:

Hi, 




We are using mysql 4.0.20 on RHEL3.0 with circular replication setup A -> B
-> C -> A. A is the master and all operations will be happening on A. We are
facing frequent mysql crash on Master with page corruption errors. How to
identify which process is causing this page corruption. After restarting the
mysql it is working fine. But we want to identify the process which is
causing page corruption ,Any clues will be helpful for us.



Thanks

Anil

DBA


 







Re: Subselect in an Update query

2005-06-17 Thread Eric Bergen

Could you accomplish this with an update and self join?
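Since MySQL refuses to reopen the update target inside a subquery, a multi-table UPDATE with a self-join (available from 4.0.4) is the usual workaround. A sketch using the table and values from the question:

```sql
-- Copy field1 from the 'Some Value' row onto the 'Another Value' row,
-- joining table1 against itself under two aliases.
UPDATE table1 AS dst, table1 AS src
SET    dst.field1 = src.field1
WHERE  src.field2 = 'Some Value'
  AND  dst.field2 = 'Another Value';
```

Because the target is opened under a different alias than the source, this sidesteps the "can't specify target table for update in FROM clause" error.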

Ed Reed wrote:


Sorry, I thought it was easy to understand.

I want to update a field in a table with a value from the same field but from a different record of the same table. 
For example, using the query in my original message,


+--+
|RecID |FIELD1 |FIELD2 |
+--+
|  1   | ABC   | A Value   |
+--+
|  2   | DEF   | Some Value|
+--+
|  3   | GHI   | Another Value |
+--+
|  4   | JKL   | More Values   |
+--+




 


Update table1 Set field1=(Select field1 From table1 Where field2='Some Value')
Where field2 = 'Another Value';
   



This query should set FIELD1 of Record 3 to 'DEF'

+--+
|RecID |FIELD1 |FIELD2 |
+--+
|  1   | ABC   | A Value   |
+--+
|  2   | DEF   | Some Value|
+--+
|  3   | DEF   | Another Value |
+--+
|  4   | JKL   | More Values   |
+--+





That's it!  Should be easy but I get an error that says You can't specify target 
table 'table1' for update in FROM clause

- Thanks


 


[EMAIL PROTECTED] 6/17/05 11:03:40 AM 
   


Ed Reed  [EMAIL PROTECTED]  wrote on 06/17/2005 01:35:40 PM:

 


Can anyone tell me how I can make this work or suggest a work around?
   



 

Update table1 Set field1=(Select field1 From table1 Where field2=Some 
   


Value)
 


Where field2 =Another Value;
   



 


Thanks
   



OK, your example must be broken (or it wouldn't be here) and since that's 
all you posted I have a hard time divining your intent. Would you please 
provide some sample data and explain what it is you want to do _in words_? 
I can't speak for everyone but I know it would sure help me figure out 
what you want to do.


Thanks!

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



 







Re: INSERT DELAYED and NOW()

2005-06-10 Thread Eric Bergen
More queries yes but not more disk i/o. The first query will never touch 
a disk.


[EMAIL PROTECTED] wrote:


Eric Bergen [EMAIL PROTECTED] wrote on 06/09/2005 12:56:59 PM:

 


How about something like this:
   



 


mysql select @t := now();
+-+
| @t := now() |
+-+
| 2005-06-09 09:55:49 |
+-+
1 row in set (0.00 sec)
   



 


mysql insert delayed into t set t = @t;
Query OK, 1 row affected (0.00 sec)
   



 


mysql select * from t;
+-+
| t   |
+-+
| 2005-06-09 09:55:49 |
+-+
1 row in set (0.01 sec)
   



 


This way you get the current time of the call and it doesn't matter how
long the insert delayed sits for.
   



 


Jochem van Dieten wrote:
   



 


On 6/9/05, Jeremiah Gowdy wrote:


 


Does this seem to break SQL / application logic in some fashion?


Not worse than it is currently broken :)

According to the SQL standard CURRENT_TIMESTAMP, which in MySQL is a
synonym for NOW(), is supposed to have a value that does not change
during a transaction. At which point during the transaction that 
 


value
 


is 'sampled' is implementation defined. (ISO/IEC 9075:2003-2 section
6.31)

Since both NOW() and INSERT DELAYED are MySQL extensions I don't
particularly care how they behave/interfere, but I would prefer any
solution/hack not to complicate MySQL ever becoming standard
compliant in this regard (and standard compliance is an official
goal).


 


Does the standard specify when the timestamp is evaluated?


   


During the transaction.




 

I agree that it might be better for it to be a separate function, but 
   


since
 

DELAYED isn't part of the standard, I'm not sure there's anything that 
   


keeps
 

an implementation from evaluating the CURRENT_TIMESTAMP for a query 
   


upon
 

receipt of the query from the network, rather than when the SQL 
   


statement is
 


evaluated.


   


Let me reiterate:
Since both NOW() and INSERT DELAYED are MySQL extensions I don't
particularly care how they behave/interfere.




 


If I wrote a SQL server from scratch, would this not
be a valid implementation, to timestamp upon network receive of a 
   


complete
 

query, rather than upon finding the CURRENT_TIMESTAMP (or NOW()) 
   


function
 


while parsing a query?


   


That depends on some more implementation issues: perceivably your
network receive could even be before the start of the transaction.
Evaluate CURRENT_TIMESTAMP only once per transaction, between the
start of the transaction and the end of the transaction.

Jochem



 



The problem with that is that you have just doubled the query count at the 
central logging server. That's a lot of traffic it can probably do 
without. 

I like the QNOW() approach. (Use an extension, the new function, to deal 
with a side effect of an extension, DELAYED. It's a universal balance kind 
of thing.) 

Some alternative names: QUEUEDNOW(), QUEUEDTIMESTAMP(), RECEIVEDTIME(), 
RECEIVEDTIMESTAMP(), ARRIVALTIMESTAMP()



Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


 







Re: INSERT DELAYED and NOW()

2005-06-09 Thread Eric Bergen

How about something like this:

mysql select @t := now();
+-+
| @t := now() |
+-+
| 2005-06-09 09:55:49 |
+-+
1 row in set (0.00 sec)

mysql insert delayed into t set t = @t;
Query OK, 1 row affected (0.00 sec)

mysql select * from t;
+-+
| t   |
+-+
| 2005-06-09 09:55:49 |
+-+
1 row in set (0.01 sec)


This way you get the current time of the call and it doesn't matter how 
long the insert delayed sits for.


Jochem van Dieten wrote:


On 6/9/05, Jeremiah Gowdy wrote:
 


Does this seem to break SQL / application logic in some fashion?
 


Not worse than it is currently broken :)

According to the SQL standard CURRENT_TIMESTAMP, which in MySQL is a
synonym for NOW(), is supposed to have a value that does not change
during a transaction. At which point during the transaction that value
is 'sampled' is implementation defined. (ISO/IEC 9075:2003-2 section
6.31)

Since both NOW() and INSERT DELAYED are MySQL extensions I don't
particularly care how they behave/interfere, but I would prefer any
solution/hack not to complicate MySQL ever becoming standard
compliant in this regard (and standard compliance is an official
goal).
 


Does the standard specify when the timestamp is evaluated?
   



During the transaction.


 


I agree that it might be better for it to be a separate function, but since
DELAYED isn't part of the standard, I'm not sure there's anything that keeps
an implementation from evaluating the CURRENT_TIMESTAMP for a query upon
receipt of the query from the network, rather than when the SQL statement is
evaluated.
   



Let me reiterate:
Since both NOW() and INSERT DELAYED are MySQL extensions I don't
particularly care how they behave/interfere.


 


If I wrote a SQL server from scratch, would this not
be a valid implementation, to timestamp upon network receive of a complete
query, rather than upon finding the CURRENT_TIMESTAMP (or NOW()) function
while parsing a query?
   



That depends on some more implementation issues: perceivably your
network receive could even be before the start of the transaction.
Evaluate CURRENT_TIMESTAMP only once per transaction, between the
start of the transaction and the end of the transaction.

Jochem

 







Re: ORDER BY and ENUM -- not alphabetical

2005-06-08 Thread Eric Bergen

It's not a bug at all. You just hit one of the features of enum :)

If you want to order alphabetically as you describe cast the enum name 
to a string like this

select col from t order by concat(my_enum);

-Eric

Daevid Vincent wrote:


Please tell me there is a way to fix this bug in mysql  Ver 12.22 Distrib
4.0.18, for pc-linux-gnu (i686)

I have a column defined like so:
Type
enum('Schedule','Report','Admin','Search','General','License','Access')

If I SELECT, and ORDER BY Type, it is ordered in the order defined by the
ENUM, not _alphabetically_ as a sane person would expect. UGH!

Please tell me there is a fix or work around.


 







Re: MySQL Slave Read Only

2005-06-06 Thread Eric Bergen
There is an option starting in 4.0.14 called read-only that will allow 
only the slave SQL thread and users with the SUPER privilege to execute 
write queries.
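A sketch of both ways to enable it; setting it at runtime assumes the variable is dynamic on your build (4.0.14+):

```sql
-- At runtime (the issuing account needs the SUPER privilege):
SET GLOBAL read_only = 1;

-- Writes from ordinary accounts now fail, while the replication SQL
-- thread keeps applying events from the master.

-- Or permanently, in my.cnf:
--   [mysqld]
--   read-only
```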



Jeremiah Gowdy wrote:


Is there a way to make MySQL Slaves read only in regards to all UPDATE, INSERT, 
DELETE, DROP, TRUNCATE, etc, except for the replication thread?  I know you can 
just change the user accounts to disallow write access to the tables, but I'm 
thinking it would be convenient to simply have a switch that restricts all 
non-replicated writes to ensure that the slave stays consistent with the 
master.  I've had a few users with administrator accounts mistakenly write to 
slaves, screwing up replication.

Anyone else think this might be useful?  If so, I might just implement it if it 
isn't implemented already.

Jeremiah Gowdy
Senior Software Engineer
FreedomVOICE Systems
http://www.freedomvoice.com
 







Re: maximum size of mysql server-id field

2005-05-27 Thread Eric Bergen
It's 32 bit. I'm not sure if it's signed or not but chances are you 
don't need more than 2 billion ids :)


Richard Lynch wrote:


On Wed, May 25, 2005 11:59 pm, David said:
 


Does anyone know what the range for server-id can be?

Is it a 16 bit number? 32 bit?

I can't seem to find it in the documentation or via google.
   



I searched dogpile.com for "my.cnf format server-id" and found:

http://mysqld.active-venture.com/Replication_Options.html

which claims it's a 32-bit non-negative.

range from 1 to 2^32 - 1

I dunno if they are authoritative or not, mind you...

32-bit that would have been my first guess, fwiw...

 







Re: ERROR 1030 (HY000): Got error 127 from storage engine

2005-05-27 Thread Eric Bergen

Did you shut down mysqld or read lock before copying the table?

Qunfeng wrote:


Hi,

I am running MySQL 4.1.8-Max-log on two Linux machines (machine A and 
machine B).  In the past, I did a simple scp to transfer all the 
database table files from one machine (A) to the other (B). After the 
transfer, the tables on B worked well.


However, I am encountering a problem transferring a big table 
(with 11 million records). The table on A works but seems to be 
corrupted after transferring to B. When querying the table on B, I 
get "Got error 127 from storage engine".


I tried myisamchk -r tablename but I still got the same msg. Any idea? 
Thanks in advance!


Qunfeng











Re: MySQL preg_split functionality?

2005-05-27 Thread Eric Bergen

I'm working on a set of UDFs for preg functions.


[EMAIL PROTECTED] wrote:

I have a hard time figuring out when you would use such a function. I do 
not believe you will be able to duplicate this behavior without 
constructing your own UDF or by writing a stored procedure. BTW, why *do* 
you want this function?


Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Scott Klarenbach [EMAIL PROTECTED] wrote on 05/27/2005 01:30:35 
PM:


 


I'm trying to replicate this PHP behavior in a MySQL stored procedure.
The purpose is to pad every character of the string with a pad
character.  For example, if the pad character is 'x' and the string is
'STRING', the result is 'xSxTxRxIxNxGx'.

Here is the PHP code if it helps.  I'd like to use a regular
expression to replace, but I guess I could loop through the string
char by char and build a new one, it's just less elegant.  Thanks in
advance.

PHP:
-
$regPattern = implode('x', preg_split('//', 'STRING', -1,
PREG_SPLIT_NO_EMPTY));


   



 







Re: ERROR 1030 (HY000): Got error 127 from storage engine

2005-05-27 Thread Eric Bergen
If you are copying a file that mysql is writing to, sections of the file 
will be from different points in time, so when that all gets put 
back together in one file on the other end it's going to be corrupt.
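A safer variant of the file-copy approach keeps the tables quiesced while the copy runs; one session holds the lock and the scp happens from a shell in between:

```sql
-- Session 1: block writes and flush table buffers to disk.
-- Keep this session open, or the lock is released.
FLUSH TABLES WITH READ LOCK;

-- ... now run the scp of the table files from a separate shell ...

-- Session 1, after the copy finishes:
UNLOCK TABLES;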


Qunfeng wrote:

No, I didn't shut down mysqld or read lock. Would you please elaborate 
it a little more on why? Thanks a lot!


Qunfeng

At 01:05 PM 5/27/2005, Eric Bergen wrote:


Did you shut down mysqld or read lock before copying the table?

Qunfeng wrote:


Hi,

I am running MySQL 4.1.8-Max-log on two Linux machine (machine A and 
machine B).  In the past, I do a simple scp to transfer all the 
database table file from one machine (A) to the other (B). After the 
transferring, the table on B works well.


However, I am encountering a problem with transferring a big table 
(with 11 million record). The table in A works but seems to be 
corrupted after transferrring to B. When querying on the table at B, 
I got Got error 127 from storage engine.


I tried myisamchk -r tablename but I still got the same msg. Any 
idea? Thanks in advance!


Qunfeng













Re: slow DELETE query

2005-05-20 Thread Eric Bergen
Also try running the delete more frequently. It won't cause much of a 
lock if there is an index on accountcode and mysql doesn't find any rows 
to delete.

gerald_clark wrote:
Matthew Simpson wrote:
I use Mysql to store call detail records from telephone calls.  I 
have around 20 voice switches that send the call detail records in 
real time using INSERT statements.

I am having a problem where I need to delete junk call records that 
get generated [old call records, call records with no accountcode, 
etc.], but when I try to run the DELETE query, SQL grinds to a halt 
which causes my voice switches to halt because they can't run the 
INSERT queries.  Is this because of table locking?  An example delete 
query:

DELETE from cdr WHERE accountcode=''
Is there a way to make the DELETE query run at a lower priority and 
allow the INSERTs?

DELETE from cdr WHERE accountcode='' LIMIT 10
wait, and run again until there is nothing more to delete.
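That repeated, bounded delete might look like the following; the batch size is arbitrary, and the loop stops when a pass affects zero rows:

```sql
-- Assumes an index on accountcode so each pass finds its victims
-- cheaply and holds locks only briefly.
DELETE FROM cdr WHERE accountcode = '' LIMIT 10000;

-- Sleep briefly, then run the same statement again; stop once it
-- reports 0 rows affected.
```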



Re: Upgrading from 4.0 to 4.1

2005-05-19 Thread Eric Bergen
When MySQL had trouble starting up, what error messages did it give in 
the .err file in the datadir?

-Eric
James Tu wrote:
I had to migrate from 4.0.11 to 4.1.12. I saved the data directory from the 
older version. I also exported all the information using phpMyAdmin's export 
tool. I have one huge .sql file.

When I upgraded to 4.1.12, I followed instructions to copy the contents of 
the old data directory to the new one. This didn't work...MySQL had trouble 
starting up.
So, I restored the new data folder and ran the .sql file to recreate all 
the tables and insert all the records. This works fine.

What is the easiest way to upgrade and migrate the data? Did I do something 
wrong by just copying the contents of the data directory? Should that method 
work?

-James
 




Re: select count(*) table

2005-05-16 Thread Eric Bergen
select count(*) table; gives me a syntax error in 4.0 and 4.1 so I'm not 
sure how that worked. Something similar would be select count(*) t; 
which uses the shortcut alias syntax. It's the same as doing select 
count(*) as t;

Simon Garner wrote:
[EMAIL PROTECTED] wrote:
I have a curious situation I was hoping someone could shed some light 
on.

mysql select count(*) table;
+---+
| table |
+---+
| 0 |
+---+
1 row in set (0.00 sec)
mysql select count(*) from table;
+--+
| count(*) |
+--+
|25965 |
+--+
1 row in set (0.00 sec)
I found it curious that the first query didn't return an error with 
there being no 'from', and even more curious that it returned a 0.  
Is the first query actually legit, and if so, what does the 0 mean?

Yes, you can do a select without a table. This allows you to get the 
values of expressions or functions.

E.g.
SELECT 1+1
will return 2, and
SELECT NOW()
will return the current date and time.
Your query is selecting COUNT(*) AS table rather than COUNT(*) FROM 
table. Naturally, without a table, COUNT(*) will return 0.

-Simon



Re: mysqldump question

2005-05-13 Thread Eric Bergen
Is there a reason you can't run mysqldump on the server? You could then 
gzip it and use any transport method with throttling you want (like wget)

-Eric
Aaron Wohl wrote:
http://www.birdsoft.demon.co.uk/proglib/slowpipe.htm would seem to do
what you want... I haven't tried it yet, but noted the URL for the next
time I needed that functionality.
- Original message -
From: Amit M Bhosle [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Date: Fri, 13 May 2005 09:18:00 +0530
Subject: mysqldump question
Hi:
i was wondering if there's any way to limit the bandwidth used by
mysqldump to dump data from remote hosts. since i couldn't find any
documentation on this, i assume that mysqldump will use all the
available bandwidth of the network.
the issue is that i'm looking to fetch data to the tune of 100s of
MBs, and i don't want the mysqldump to hog all the bandwidth, thus
adversely affecting other communication.
thx in advance for ur time.
AB
 




Re: Read past Equivalent in MySQL

2005-05-13 Thread Eric Bergen
I agree. It sounds like you could use plain repeatable read isolation 
transactions.  If someone else is modifying those rows you get an older 
version from when your transaction was started. No need for skipping 
anything.

Martijn Tonies wrote:
I am using InnoDB only.
But, it's not skipping locked rows.
 

Ditto that here.
   

Actually, I consider that a good thing... What's the point in leaving
out rows that have not been modified yet but are about to be updated?
The transaction that has the rows locked might as well be rolled back.
The data that is visible is the data as available at that moment.
With regards,
Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS SQL
Server
Upscene Productions
http://www.upscene.com
 




Re: Seeking Syntax Assistance

2005-05-13 Thread Eric Bergen
Sorry, MySQL doesn't support subqueries until version 4.1.
-Eric
Scott Purcell wrote:
Hello,
I am accustomed to Oracle query syntax, and I am having trouble with the 
following query, which uses IN. I am running mysql ( Ver 12.21 Distrib 
4.0.15, for Win95/Win98 (i32) ), which does not support IN with a subquery.
How do I rewrite the below query to function?
Basically what I have is three tables (see below)
I want to show only the categories (name and id) that have items with a 
quantity greater than 0 in the item table, where item.id is referenced by 
item_cat_rel.id and item_cat_rel.cat_id = category.cat_id.
QUERY__
select c.cat_id, c.name, crel.id, crel.cat_id
from category c, item_cat_rel crel
where c.parent_id = 0 and c.visible = 'Y'
and c.id = crel.cat_id 
and crel.id IN (select id from item where quantity >= 1)
order by c.sort

I tried replacing IN with = but it does not work.
Any help would be appreciated.
Thanks,
Scott


mysql> describe category;
+---+--+--+-+-++
| Field | Type | Null | Key | Default | Extra  |
+---+--+--+-+-++
| cat_id| int(11)  |  | PRI | NULL| auto_increment |
| parent_id | int(11)  | YES  | | NULL||
| visible   | char(1)  |  | | Y   ||
| sort  | int(11)  | YES  | | NULL||
| name  | varchar(200) | YES  | | NULL||
+---+--+--+-+-++
5 rows in set (0.03 sec)
mysql> describe item_cat_rel
    -> ;
++-+--+-+-+---+
| Field  | Type| Null | Key | Default | Extra |
++-+--+-+-+---+
| id | int(11) | YES  | MUL | NULL|   |
| cat_id | int(11) |  | | 0   |   |
++-+--+-+-+---+
2 rows in set (0.01 sec)
mysql> describe item;
+-+--+--+-+-++
| Field   | Type | Null | Key | Default | Extra  |
+-+--+--+-+-++
| id  | int(11)  |  | PRI | NULL| auto_increment |
| manufacturer_id | varchar(50)  | YES  | | NULL||
| name| varchar(255) | YES  | | NULL||
| description | varchar(255) | YES  | | NULL||
| short_desc  | varchar(255) | YES  | | NULL||
| asset_id| varchar(14)  | YES  | | NULL||
| dimensions  | varchar(50)  | YES  | | NULL||
| pounds  | int(11)  | YES  | | NULL||
| price   | decimal(9,2) | YES  | | NULL||
| sale_price  | decimal(9,2) | YES  | | NULL||
| quantity| int(11)  |  | | 0   ||
| featured| char(1)  |  | | N   ||
| seasonal| char(1)  |  | | N   ||
+-+--+--+-+-++
13 rows in set (0.02 sec)
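For MySQL 4.0 the standard workaround is to rewrite the IN (subquery) as a join. A minimal runnable sketch (SQLite stands in for MySQL here; the sample rows are made up, and the join uses category.cat_id, which the schema above suggests was intended):

```python
import sqlite3

# Rewrite of `crel.id IN (SELECT id FROM item WHERE quantity > 0)` as a join.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE category (cat_id INTEGER PRIMARY KEY, parent_id INT,
                       visible CHAR(1), sort INT, name TEXT);
CREATE TABLE item_cat_rel (id INT, cat_id INT);
CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT, quantity INT);
INSERT INTO category VALUES (1, 0, 'Y', 1, 'Lamps'), (2, 0, 'Y', 2, 'Rugs');
INSERT INTO item VALUES (10, 'Desk lamp', 5), (11, 'Wool rug', 0);
INSERT INTO item_cat_rel VALUES (10, 1), (11, 2);
""")
rows = db.execute("""
    SELECT DISTINCT c.cat_id, c.name, c.sort
    FROM category c
    JOIN item_cat_rel crel ON crel.cat_id = c.cat_id
    JOIN item i            ON i.id = crel.id
    WHERE c.parent_id = 0 AND c.visible = 'Y' AND i.quantity > 0
    ORDER BY c.sort
""").fetchall()
print(rows)  # -> [(1, 'Lamps', 1)]  -- only the category with in-stock items
```

DISTINCT is needed because a category with several in-stock items would otherwise repeat once per item.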
 




Re: Efficiently finding a random record

2005-05-13 Thread Eric Bergen
Even better is if you have an integer primary key (think auto_increment) 
and use in()

So if you want 10 random rows in your app generate 20 or so random 
numbers and do something like
select col1 from t where x in (1, 5, 3, 2, ...) limit 10

check num rows and if you don't get enough generate more random numbers 
and try again.

in() is blazing fast even with thousands of numbers so don't be afraid 
to kick a few extra in.

-Eric
Philip Hallstrom wrote:
I have a db of about 300,000 records and when I try to find one 
random record like this:

select * from table order by rand() limit 1;
it can take several minutes. My Sherlock Holmes instincts tell me 
that what I'm doing is somehow inefficient. What is the primary 
culprit here?

The culprit is that mysql has to assign a random value (via rand()) to 
each of the 300,000 rows, then order all 300,000 rows by that random 
value and return the first one.

So even though you're just retrieving one value, you're processing 
300,000 rows.

You'd be better off doing something like this in your application..
row_count = get row count;
random_row = get random number from 0 to row_count - 1;
result = db query LIMIT 1 OFFSET random_row
or something like that...
-philip
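Eric's IN() trick above can be sketched as follows (SQLite stands in for MySQL; the table t(x, col1) and the data are hypothetical). Oversample 2x so gaps in the auto_increment sequence still leave enough hits, then LIMIT down to the count you want:

```python
import random
import sqlite3

# Build a table with deliberate gaps in the id sequence (every 7th id missing),
# then fetch random rows via IN() on oversampled random ids.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (x INTEGER PRIMARY KEY, col1 TEXT)")
db.executemany("INSERT INTO t VALUES (?, ?)",
               [(i, "row%d" % i) for i in range(1, 301) if i % 7])
wanted, max_id = 10, 300
picks = random.sample(range(1, max_id + 1), wanted * 2)  # 2x oversample
marks = ", ".join("?" * len(picks))
rows = db.execute("SELECT col1 FROM t WHERE x IN (%s) LIMIT ?" % marks,
                  (*picks, wanted)).fetchall()
# If len(rows) < wanted, generate more random ids and query again.
print(len(rows))  # usually exactly `wanted`
```

Philip's alternative (one COUNT(*) followed by LIMIT 1 OFFSET random_row) avoids the retry loop but pays for the offset scan on large offsets.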



Re: Fastest way to get the record count?

2005-05-13 Thread Eric Bergen
select count(*) from table is the fastest for MyISAM because MyISAM caches 
the row count. This will also include any rows that have NULL values.

select count(col) from table will not count rows where that column is 
NULL, and will also be forced to access the index or data file.

-Eric
Brian Dunning wrote:
I've been doing the following to get the record count from a db. Is  
this the fastest?

select count(id) as `count` from tablename;
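The COUNT(*) vs. COUNT(col) difference is easy to check directly (SQLite shown here as a runnable stand-in; the NULL-counting semantics match MySQL, though the MyISAM row-count cache is MySQL-specific):

```python
import sqlite3

# COUNT(*) counts rows; COUNT(col) skips rows where col IS NULL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER)")
db.executemany("INSERT INTO t (id) VALUES (?)", [(1,), (2,), (None,)])
count_star = db.execute("SELECT COUNT(*) FROM t").fetchone()[0]
count_id = db.execute("SELECT COUNT(id) FROM t").fetchone()[0]
print(count_star, count_id)  # 3 2
```

So `count(id)` is only equivalent to `count(*)` when id is NOT NULL (as it is for a primary key), and even then it may force an index read.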



Re: Efficiently finding a random record

2005-05-13 Thread Eric Bergen
in() can take millions of arguments. Up to max packet size. Try it :)
Dan Bolser wrote:
On Fri, 13 May 2005, Eric Bergen wrote:
 

Even better is if you have an integer primary key (think auto_increment) 
and use in()

So if you want 10 random rows in your app generate 20 or so random 
numbers and do something like
select col1 from t where x in (1, 5, 3, 2, ...) limit 10

check num rows and if you don't get enough generate more random numbers 
and try again.

in() is blazing fast even with thousands of numbers so don't be afraid 
to kick a few extra in.
   

I heard about a 255 'in' limit. When you say 'thousands of numbers' do you
mean in the IN or in the column?
 

-Eric
Philip Hallstrom wrote:
   

I have a db of about 300,000 records and when I try to find one 
random record like this:

select * from table order by rand() limit 1;
it can take several minutes. My Sherlock Holmes instincts tell me 
that what I'm doing is somehow inefficient. What is the primary 
culprit here?
   

The culprit is that mysql has to assign a random value (via rand()) to 
each of the 300,000 rows, then order all 300,000 rows by that random 
value and return the first one.

So even though you're just retrieving one value, you're processing 
300,000 rows.

You'd be better off doing something like this in your application..
row_count = get row count;
random_row = get random number from 0 to row_count - 1;
result = db query LIMIT 1 OFFSET random_row
or something like that...
-philip
 

   


 




Re: Maximum number of user variables

2005-05-12 Thread Eric Bergen
It sounds like you should be doing the link preservation and number 
update part in PHP or Perl.


Neculai Macarie wrote:
Not that I'm aware of. What type of conversions are you doing that you
need 30,000 user vars? An easy solution would be to try it and find out :)

I need to move multiple linked entries (in around 12 tables) from one
running server to another. I'm using auto_increments all over the place and
I have the following problem: I need to move (no replication possible) the
information in those 12 tables to an identical functioning system (same
software, same db, but other server) and I need to preserve the relations
between the tables. The problem that I have is that the systems evolve
independently and I can have the same id for 2 different entries on the
system (e.g. on the first system I can have the id 10 for User1, but on the
second system id 10 would be for another user).

Perhaps after you dump the structure and data into sql files, you could
temporarily remove the auto_increment attribute from those columns. Then
start inserting. After that, add the auto_increment attribute again. OR use
bulk copy(?) if I'm not mistaken, or any other method to copy it as-is to
the new db servers (compress it first).

Donny,
Thanks for the answer. Dumping the structure and data alone (mysqldump
style) is not helping me because the tables are linked and I must update the
linked tables with the new id's.
 




Re: Very Urgent... How can I start with MySQL in C language?

2005-05-12 Thread Eric Bergen
Start reading here: http://dev.mysql.com/doc/mysql/en/c.html

On 5/12/05, Ashok Kumar [EMAIL PROTECTED] wrote:
 Hi friends,
 I'm Ashok, a new member to this forum. I'm doing my final year of graduation
 and I'm new to this MySQL, C, and Windows combination (I never worked with DB
 connectivity in C). There are no header files in TC such as mysql.h and
 so on. How can I include those files, and how can I create a db via queries
 from 'C' (is it possible to create a db manually)? Please give me suggestions
 in a somewhat detailed manner.

 Thank you and regards,
 Ashok Kumar.P.S
 
 
 


-- 
Eric Bergen
[EMAIL PROTECTED]
http://www.ebergen.net




Re: Maximum number of user variables

2005-05-11 Thread Eric Bergen
Not that I'm aware of. What type of conversions are you doing that you 
need 30,000 user vars? An easy solution would be to try it and find out :)

-Eric
Neculai Macarie wrote:
Hi!
What is the maximum number of user variables that I can have in MySQL? (I
need to transfer an application from one server to another and I need to
rebuild the links between tables with new id's on the new server (I'm using
auto_increment extensively), and I have a solution that involves many user
variables, around 30,000 in a single connection.)
 




Re: Dates after version upgrade

2005-05-11 Thread Eric Bergen
The default timestamp display format changed between MySQL 4.0 and 4.1.
The easy fix is to add + 0 to all timestamp fields in selects, like this:
select my_ts + 0 from t;
You should always read the upgrade notes in the manual. This is very well 
documented:
http://dev.mysql.com/doc/mysql/en/upgrading-from-4-0.html
Jeff Hinds wrote:
After I upgraded from 4.0.16 to 4.1.11, I have found that the date fields are
being pre-formatted and are causing problems for background programs. Is
there a configuration or session setting that will force the date formatting
back to the standard of 4.0 and earlier to correct the program issues?


Thanks,




http://www.mtsu.edu/~itd  Information Technology Division

Jeff Hinds, OCP
Database Specialist

Middle Tennessee State University
001 Cope Administration Building
Murfreesboro, TN 37132

[EMAIL PROTECTED]
tel: (615) 898-5719
fax: (615) 898-5720



 




Re: Union Performance Question

2005-05-11 Thread Eric Bergen
Use one big table. A merge table will run the same query over all 10 
tables. The key buffer is filled from the top down, so if you have an 
index tree that looks like this:

        a
       / \
      b   c
     / \ / \
    d  e f  g

almost all queries for that index will be able to use the buffer for 'a' 
and 'b'. It's not until they get things very narrowed down that they have 
to actually hit the index files for specific leaves. Creating ten small 
tables creates ten duplicate 'a' and 'b' sections, which pollutes the key 
buffer.

-Eric
Dathan Pattishall wrote:
Use 10 smaller tables and perform a union. It's faster to look up in
smaller tables than larger ones, generally. Additionally, more of the
key_buffer can be used for the tables with the most hits over the tables
with fewer hits, sustaining lookup speed over time.

 

-Original Message-
From: Dan Salzer [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 11, 2005 11:46 AM
To: mysql@lists.mysql.com
Subject: Union Performance Question

Hi everyone,
I have a question regarding the performance of UNION queries:
I need to do a full-text search against a large number of 
rows. Is it faster to have one table with 10,000,000 text 
rows and perform one full-text search, or am I better off 
having 10 smaller, more manageable tables and performing 
UNION'ed full-text searches against them?

A merge table seems like it would be the ideal solution, but 
the limitation on full-text indexing prohibits it.

Thanks!
-Dan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

   

 




Re: missing library

2005-05-10 Thread Eric Bergen
You need to install your distro's ncurses and ncurses-devel packages.
-Eric
ganesan malairaja wrote:
OK, I found the source file (tarball mysql-4.1.11.tar.gz).
After executing the ./configure command it does some checking, then it
gives this error:

checking for termcap functions library ... configure: error: no
curses/termcap library found

Ganesan Malairaja





  1   2   >