Re: See lock table

2006-07-07 Thread Barry

Gabriel Mahiques schrieb:
Friend, I need to see whether a table is locked by some application or some 
user. Do you know of a tool for this (GPL licensed, ideally)? Or some 
statement?

When a table is locked, how can I unlock it?
My problem is that some applications hit an error and the user closes 
them with the task manager, so the table remains locked.

Regards


Use InnoDB and edit the my.cnf with your preferred values.
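[Editorial note: for reference, MySQL also has statements for spotting and clearing a stuck lock left behind by a killed client; a sketch, where the connection id 1234 is a placeholder taken from the process list:]

```sql
-- Show every connection and what it is doing; a thread stuck in the
-- "Locked" state points at the blocking/blocked statements
SHOW PROCESSLIST;

-- Show which tables in the current database are open and in use
SHOW OPEN TABLES;

-- Terminate the dead client's connection so its locks are released
-- (1234 is a placeholder id from SHOW PROCESSLIST)
KILL 1234;
```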

Greetings
Barry

--
Smileys rule (cX.x)C --o(^_^o)
Dance for me! ^(^_^)o (o^_^)o o(^_^)^ o(^_^o)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: space usage

2006-07-07 Thread Barry

Martin Jespersen schrieb:
Does anyone have a clue how mysql optimizes empty fields and how 
query speed is affected?


Why don't you read the part of the mysql documentation about 
optimization?


what will be better for query speed/size: adding them as NULL, using 
NULL as the default, or as NOT NULL, using 0 and '' as defaults?


Depends on what you need!
But both are okay.
NULL would just give you more free space, since a NULL doesn't add any 
bytes to the column.
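[Editorial note: a sketch of the two layouts being compared; the table and column names here are made up for illustration:]

```sql
-- Variant 1: nullable columns with NULL as the default
CREATE TABLE t_null (
  qty  INT         NULL DEFAULT NULL,
  note VARCHAR(20) NULL DEFAULT NULL
);

-- Variant 2: NOT NULL columns with 0 and '' as defaults
CREATE TABLE t_notnull (
  qty  INT         NOT NULL DEFAULT 0,
  note VARCHAR(20) NOT NULL DEFAULT ''
);
```

Which is better depends on whether the application needs to distinguish "no value" from zero or the empty string.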


Greetings
Barry

--
Smileys rule (cX.x)C --o(^_^o)
Dance for me! ^(^_^)o (o^_^)o o(^_^)^ o(^_^o)




Re: remote monitoring of mySQL

2006-07-07 Thread Vittorio Zuccalà
Andy Ford ha scritto:
 Can I do it with DBD::Perl
   

DBD::mysql is a good module and you can check mysql status...
You can run a lot of SELECTs, and you can send it admin functions such as
createdb, shutdown or reload.
I use this module with my mysql databases (I have only one server, not many
as you have...) and I'm very happy with it.




-- 
vittorio zuccalà
Finconsumo Banca SPA
[EMAIL PROTECTED]
Tel: 011-6319464




Date functions

2006-07-07 Thread Chris W
It's late and I just gave up reading the manual.  Can someone please 
tell me the easiest way to do a query that will return all rows with a 
time stamp that is X number of seconds older than the current time?  
Something like this.


SELECT * FROM t
WHERE TimeCol < (now() - 60*60*24*3)

Yes I know that is just 3 days but other times I will want to find 
records that are a few hours old so I like using the formula.


--
Chris W
KE5GIX

Gift Giving Made Easy
Get the gifts you want & 
give the gifts they want
One stop wish list for any gift, 
from anywhere, for any occasion!

http://thewishzone.com





Re: MySQL 5.0.22 and show columns bug?

2006-07-07 Thread SciBit MySQL Team

While you are not wrong, James, the length member is supposed to denote the 
maximum length of data contained in the result's specified column.  NOTE: the 
result's.  I.e. why give such an arbitrary number of bytes/length when no 
ENUMs or SETs are even in the result?  The point being, even if you create a 
table containing 10 INT columns, the result of show columns from should show 
a Type column length of 3, with a maximum data allocation for the 10 rows of 
30 bytes, and not ~2MB, as is currently the case.

And even if, in the worst case, MySQL Dev decided to give the length back as the 
maximum potential length, who determined that 196605 should be the magic number? 
An ENUM can have 64K values, each of which can be a text value/label of at 
least 64 characters, so a magic number would have to be at least megs in size to 
play it safe. Such an approach is, simply put, stupid.

Ideally, as was the case in previous versions of MySQL, the Type column's 
Length should be given in context of the result, i.e. if there is an ENUM in 
the column list and it has the longest type description, the Type column's 
length should reflect its contained data size.

Kind Regards
SciBit MySQL Team
http://www.scibit.com

 
 -Original Message-
 From: James Harvard [EMAIL PROTECTED]
 To: SciBit MySQL Team [EMAIL PROTECTED]
 CC: mysql@lists.mysql.com mysql@lists.mysql.com
 Subject: [Spam-Junk]Re: MySQL 5.0.22 and show columns bug?
 Sent: Thu, 06 Jul 2006 13:50:33 GMT
 Received: Thu, 06 Jul 2006 13:50:29 GMT
 Read: Sat, 30 Dec 1899 00:00:00 GMT
 Although I know nothing about C I imagine this is because the 'type' column 
 can contain all the possible values from an ENUM or SET field.
 James Harvard
 
 At 10:30 am + 6/7/06, SciBit MySQL Team wrote:
 Since a couple of recent stable versions back (and more recently, MySQL 
 5.0.22), MySQL has been returning the column length (C API) of the 'Type' 
 column of a show columns from.. statement as being 196605 (almost 192KB), 
 when this column only really contains data in the region of 10 bytes
 
 
 
 






Re: PBXT version 0.9.5 has been released

2006-07-07 Thread Paul McCullagh

Hi DÆVID,

Thanks for your feedback. What version of IE are you using?

You got a point about the style. I'll look into it...

Please reply to me directly, or write to [EMAIL PROTECTED]

Thanks,

Paul

On Jul 6, 2006, at 9:17 PM, Daevid Vincent wrote:

Your site has a bunch of JS errors (using IE) so I can't roll over ANY of
the menus (left or upper right). I also cannot write to 'contact' because of
this same error. Hence I send it here... To the list... *sigh*

Can I also suggest not using a dark red hyperlink with black text. I didn't
even realize those were links until I randomly rolled over one and it
highlighted bright red.

DÆVID






Re: MySQL 5.0.22 and show columns bug?

2006-07-07 Thread James Harvard
OK, fair enough. In that case I would think that filing a report on 
bugs.mysql.com would be your best way forward.

At 8:32 am + 7/7/06, SciBit MySQL Team wrote:
While you are not wrong, James, the length member is supposed to denote the 
maximum length of data contained in the result's specified column.  NOTE: the 
result's.  I.e. why give such an arbitrary number of bytes/length when no 
ENUMs or SETs are even in the result?  The point being, even if you create a 
table containing 10 INT columns, the result of show columns from should show 
a Type column length of 3, with a maximum data allocation for the 10 rows of 
30 bytes, and not ~2MB, as is currently the case.




Specified key was too long; max key length is 1000 bytes (UNIQUE KEY on multiple columns)

2006-07-07 Thread Lubomir Host 'rajo'
Description:
Migration problem from 4.0.22 to 5.0.x. I can't create the following table on the 5.0.x 
version of mysql. The problem doesn't appear on 4.0.x versions:

CREATE TABLE `PHONESlog_uniq` (
  `user_agent` varchar(80) default NULL,
  `http_x_wap_profile` varchar(255) default NULL,
  `pid` smallint(5) unsigned NOT NULL default '0',
  UNIQUE KEY `uniq_phone_key` (`user_agent`,`http_x_wap_profile`,`pid`)
) TYPE=MyISAM;

How-To-Repeat:

server 1:

mysql> SELECT VERSION();
+------------+
| VERSION()  |
+------------+
| 4.0.22-log |
+------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE `PHONESlog_uniq` (   `user_agent` varchar(80) default NULL, 
  `http_x_wap_profile` varchar(255) default NULL,   `pid` smallint(5) unsigned 
NOT NULL default '0',   UNIQUE KEY `uniq_phone_key` 
(`user_agent`,`http_x_wap_profile`,`pid`) ) TYPE=MyISAM;
Query OK, 0 rows affected (0.06 sec)

server 2:

mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 5.0.18    |
+-----------+
1 row in set (0.00 sec)

mysql> CREATE TABLE `PHONESlog_uniq` (   `user_agent` varchar(80) default NULL, 
  `http_x_wap_profile` varchar(255) default NULL,   `pid` smallint(5) unsigned 
NOT NULL default '0',   UNIQUE KEY `uniq_phone_key` 
(`user_agent`,`http_x_wap_profile`,`pid`) ) TYPE=MyISAM;
ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes
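[Editorial note: the limit is likely being hit because MySQL 5.0 counts the full declared length of each key column against the key-length limit, and under a multi-byte default character set such as utf8 each character counts as 3 bytes (80 + 255 characters is well over 1000 bytes). Assuming rows that differ only beyond a prefix of the wide column need not be distinguished, one possible workaround is a prefix index; the prefix length 200 here is an illustrative choice:]

```sql
-- Index only the first 200 characters of http_x_wap_profile so the
-- combined key stays under the 1000-byte limit even under utf8
CREATE TABLE `PHONESlog_uniq` (
  `user_agent` varchar(80) default NULL,
  `http_x_wap_profile` varchar(255) default NULL,
  `pid` smallint(5) unsigned NOT NULL default '0',
  UNIQUE KEY `uniq_phone_key` (`user_agent`,`http_x_wap_profile`(200),`pid`)
) TYPE=MyISAM;
```

Note the caveat: with a prefix UNIQUE key, rows differing only after character 200 of `http_x_wap_profile` would be rejected as duplicates. Declaring the columns with an explicit single-byte charset such as latin1 is the other common workaround.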


Fix:

Submitter-Id:  submitter ID
Originator:Lubomir Host
Organization:
  Lubomir Host 'rajo' rajo AT platon.sk   ICQ #:  257322664   ,''`.
  Platon Group  http://platon.sk/  : :' :
  Homepage: http://rajo.platon.sk/ `. `'
  http://www.gnu.org/philosophy/no-word-attachments.html `-

MySQL support: extended email support
Synopsis:  Migration problem from 4.0.22 to 5.0.x. I can't create the 
following table on the 5.0.x version of mysql. The problem doesn't appear on 4.0.x 
versions
Severity:  serious
Priority:  high
Category:  mysql
Class: sw-bug
Release:   mysql-5.0.22-Debian_3 (Debian Etch distribution)

C compiler:gcc (GCC) 4.1.2 20060613 (prerelease) (Debian 4.1.1-4)
C++ compiler:  g++ (GCC) 4.1.2 20060613 (prerelease) (Debian 4.1.1-4)
Environment:
Debian GNU/Linux or  FreeBSD, all versions of MySQL 5.0.x
System: Linux Idea 2.6.15-1-686 #2 Mon Mar 6 15:27:08 UTC 2006 i686 GNU/Linux
Architecture: i686

Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gcc /usr/bin/cc
GCC: Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v 
--enable-languages=c,c++,java,f95,objc,ada,treelang --prefix=/usr 
--enable-shared --with-system-zlib --libexecdir=/usr/lib 
--without-included-gettext --enable-threads=posix --enable-nls 
--program-suffix=-4.0 --enable-__cxa_atexit --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-java-awt=gtk-default --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.4.2-gcj-4.0-1.4.2.0/jre --enable-mpfr 
--disable-werror --with-tune=i686 --enable-checking=release i486-linux-gnu
Thread model: posix
gcc version 4.0.4 20060507 (prerelease) (Debian 4.0.3-3)
Compilation info: CC='gcc'  CFLAGS='-DBIG_JOINS=1 -O2'  CXX='g++'  
CXXFLAGS='-DBIG_JOINS=1 -felide-constructors -fno-rtti -O2'  LDFLAGS=''  
ASFLAGS=''
LIBC: 
lrwxrwxrwx 1 root root 13 Jun 28 23:32 /lib/libc.so.6 -> libc-2.3.6.so
-rwxr-xr-x 1 root root 1177116 May 31 08:59 /lib/libc-2.3.6.so
-rw-r--r-- 1 root root 2628734 Jun  8 09:25 /usr/lib/libc.a
-rwxr-xr-x 1 root root 204 Jun  8 09:07 /usr/lib/libc.so
lrwxrwxrwx 1 root root 19 Jun 28 22:04 /usr/lib/libc-client.a -> 
/usr/lib/c-client.a
lrwxrwxrwx 1 root root 28 Jun 28 22:05 /usr/lib/libc-client.so.2002edebian -> 
libc-client.so.2002edebian.1
-rw-r--r-- 1 root root 772872 Jan 16 21:34 /usr/lib/libc-client.so.2002edebian.1
Configure command: ./configure '--build=i486-linux-gnu' '--host=i486-linux-gnu' 
'--prefix=/usr' '--exec-prefix=/usr' '--libexecdir=/usr/sbin' 
'--datadir=/usr/share' '--sysconfdir=/etc/mysql' 
'--localstatedir=/var/lib/mysql' '--includedir=/usr/include' 
'--infodir=/usr/share/info' '--mandir=/usr/share/man' 
'--with-server-suffix=-Debian_3' '--with-comment=Debian Etch distribution' 
'--enable-shared' '--enable-static' '--enable-thread-safe-client' 
'--enable-assembler' '--enable-local-infile' '--with-big-tables' '--with-raid' 
'--with-unix-socket-path=/var/run/mysqld/mysqld.sock' 
'--with-mysqld-user=mysql' '--with-libwrap' '--with-vio' '--without-openssl' 
'--without-docs' '--without-bench' '--without-readline' 
'--with-extra-charsets=all' '--with-innodb' '--with-isam' 
'--with-archive-storage-engine' '--with-csv-storage-engine' 
'--with-federated-storage-engine' '--without-embedded-server' 
'--with-ndbcluster' '--with-ndb-shm' '--without-ndb-sci' '--without-ndb-test' 
'--with-embedded-server' '--with-embedded-privilege-control' '--with-ndb-docs' 
'CC=gcc' 'CFLAGS=-DBIG_JOINS=1 -O2' 'CXXFLAGS=-DBIG_JOINS=1 
-felide-constructors -fno-rtti -O2' 'CXX=g++' 'build_alias=i486-linux-gnu' 
'host_alias=i486-linux-gnu'



RE: Date functions

2006-07-07 Thread Addison, Mark
From: Chris W  Sent: 07 July 2006 09:23
 
 It's late and I just gave up reading the manual.  Can someone please 
 tell me the easiest way to do a query that will return all 
 rows with a 
 time stamp that is X number of seconds older than the current time?  
 Something like this.
 
 SELECT * FROM t
 WHERE TimeCol < (now() - 60*60*24*3)
 
 Yes I know that is just 3 days but other times I will want to find 
 records that are a few hours old so I like using the formula.

SELECT * FROM t
WHERE TimeCol < DATE_SUB(NOW(), INTERVAL 60*60*24*3 SECOND);

http://dev.mysql.com/doc/refman/4.1/en/date-and-time-functions.html

mark
--
 





MARK ADDISON
WEB DEVELOPER

200 GRAY'S INN ROAD
LONDON
WC1X 8XZ
UNITED KINGDOM
T +44 (0)20 7430 4678
F 
E [EMAIL PROTECTED]
WWW.ITN.CO.UK





Re: Specified key was too long; max key length is 1000 bytes (UNIQUE KEY on multiple columns)

2006-07-07 Thread Remo Tex

Lubomir Host 'rajo' wrote:

Description:

Migration problem from 4.0.22 to 5.0.x. I can't create the following table on the 5.0.x 
version of mysql. The problem doesn't appear on 4.0.x versions:

CREATE TABLE `PHONESlog_uniq` (
  `user_agent` varchar(80) default NULL,
  `http_x_wap_profile` varchar(255) default NULL,
  `pid` smallint(5) unsigned NOT NULL default '0',
  UNIQUE KEY `uniq_phone_key` (`user_agent`,`http_x_wap_profile`,`pid`)
) TYPE=MyISAM;


How-To-Repeat:


server 1:

mysql SELECT VERSION();
++
| VERSION()  |
++
| 4.0.22-log |
++
1 row in set (0.00 sec)

mysql CREATE TABLE `PHONESlog_uniq` (   `user_agent` varchar(80) default NULL, 
  `http_x_wap_profile` varchar(255) default NULL,   `pid` smallint(5) unsigned NOT 
NULL default '0',   UNIQUE KEY `uniq_phone_key` 
(`user_agent`,`http_x_wap_profile`,`pid`) ) TYPE=MyISAM;
Query OK, 0 rows affected (0.06 sec)

server 2:

mysql SELECT VERSION();
+---+
| VERSION() |
+---+
| 5.0.18|
+---+
1 row in set (0.00 sec)

mysql CREATE TABLE `PHONESlog_uniq` (   `user_agent` varchar(80) default NULL, 
  `http_x_wap_profile` varchar(255) default NULL,   `pid` smallint(5) unsigned NOT 
NULL default '0',   UNIQUE KEY `uniq_phone_key` 
(`user_agent`,`http_x_wap_profile`,`pid`) ) TYPE=MyISAM;
ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes



Fix:



Submitter-Id:   submitter ID
Originator: Lubomir Host
Organization:

  Lubomir Host 'rajo' rajo AT platon.sk   ICQ #:  257322664   ,''`.
  Platon Group  http://platon.sk/  : :' :
  Homepage: http://rajo.platon.sk/ `. `'
  http://www.gnu.org/philosophy/no-word-attachments.html `-

MySQL support: extended email support
Synopsis:   Migration problem from 4.0.22 to 5.0.x. I can't create 
following table on 5.0.x version of mysql. Problem does't apper on 4.0.x version
Severity:   serious
Priority:   high
Category:   mysql
Class:  sw-bug
Release:mysql-5.0.22-Debian_3 (Debian Etch distribution)



C compiler:gcc (GCC) 4.1.2 20060613 (prerelease) (Debian 4.1.1-4)
C++ compiler:  g++ (GCC) 4.1.2 20060613 (prerelease) (Debian 4.1.1-4)
Environment:

Debian GNU/Linux or  FreeBSD, all versions of MySQL 5.0.x
System: Linux Idea 2.6.15-1-686 #2 Mon Mar 6 15:27:08 UTC 2006 i686 GNU/Linux
Architecture: i686

Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gcc /usr/bin/cc
GCC: Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v 
--enable-languages=c,c++,java,f95,objc,ada,treelang --prefix=/usr 
--enable-shared --with-system-zlib --libexecdir=/usr/lib 
--without-included-gettext --enable-threads=posix --enable-nls 
--program-suffix=-4.0 --enable-__cxa_atexit --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-java-awt=gtk-default --enable-gtk-cairo 
--with-java-home=/usr/lib/jvm/java-1.4.2-gcj-4.0-1.4.2.0/jre --enable-mpfr 
--disable-werror --with-tune=i686 --enable-checking=release i486-linux-gnu
Thread model: posix
gcc version 4.0.4 20060507 (prerelease) (Debian 4.0.3-3)
Compilation info: CC='gcc'  CFLAGS='-DBIG_JOINS=1 -O2'  CXX='g++'  
CXXFLAGS='-DBIG_JOINS=1 -felide-constructors -fno-rtti -O2'  LDFLAGS=''  
ASFLAGS=''
LIBC: 
lrwxrwxrwx 1 root root 13 Jun 28 23:32 /lib/libc.so.6 - libc-2.3.6.so

-rwxr-xr-x 1 root root 1177116 May 31 08:59 /lib/libc-2.3.6.so
-rw-r--r-- 1 root root 2628734 Jun  8 09:25 /usr/lib/libc.a
-rwxr-xr-x 1 root root 204 Jun  8 09:07 /usr/lib/libc.so
lrwxrwxrwx 1 root root 19 Jun 28 22:04 /usr/lib/libc-client.a - 
/usr/lib/c-client.a
lrwxrwxrwx 1 root root 28 Jun 28 22:05 /usr/lib/libc-client.so.2002edebian - 
libc-client.so.2002edebian.1
-rw-r--r-- 1 root root 772872 Jan 16 21:34 /usr/lib/libc-client.so.2002edebian.1
Configure command: ./configure '--build=i486-linux-gnu' '--host=i486-linux-gnu' 
'--prefix=/usr' '--exec-prefix=/usr' '--libexecdir=/usr/sbin' 
'--datadir=/usr/share' '--sysconfdir=/etc/mysql' 
'--localstatedir=/var/lib/mysql' '--includedir=/usr/include' 
'--infodir=/usr/share/info' '--mandir=/usr/share/man' 
'--with-server-suffix=-Debian_3' '--with-comment=Debian Etch distribution' 
'--enable-shared' '--enable-static' '--enable-thread-safe-client' 
'--enable-assembler' '--enable-local-infile' '--with-big-tables' '--with-raid' 
'--with-unix-socket-path=/var/run/mysqld/mysqld.sock' 
'--with-mysqld-user=mysql' '--with-libwrap' '--with-vio' '--without-openssl' 
'--without-docs' '--without-bench' '--without-readline' 
'--with-extra-charsets=all' '--with-innodb' '--with-isam' 
'--with-archive-storage-engine' '--with-csv-storage-engine' 
'--with-federated-storage-engine' '--without-embedded-server' 
'--with-ndbcluster' '--with-ndb-shm' '--without-ndb-sci' '--without-ndb-test' 
'--with-embedded

-server' '--with-embedded-privilege-control' '--with-ndb-docs' 'CC=gcc' 
'CFLAGS=-DBIG_JOINS=1 -O2' 'CXXFLAGS=-DBIG_JOINS=1 -felide-constructors 
-fno-rtti -O2' 'CXX=g++' 

Re: Date functions

2006-07-07 Thread Dan Buettner

Try this:

SELECT * FROM t
where TimeCol < date_sub( now(), INTERVAL x SECOND )

Dan


On 7/7/06, Chris W [EMAIL PROTECTED] wrote:

It's late and I just gave up reading the manual.  Can someone please
tell me the easiest way to do a query that will return all rows with a
time stamp that is X number of seconds older than the current time?
Something like this.

SELECT * FROM t
WHERE TimeCol < (now() - 60*60*24*3)

Yes I know that is just 3 days but other times I will want to find
records that are a few hours old so I like using the formula.

--
Chris W
KE5GIX





Re: Date functions

2006-07-07 Thread Brent Baisley

The INTERVAL syntax is what you are looking for. It doesn't have to be SECOND 
(with no S); you could use DAY, HOUR, WEEK, etc.

SELECT * FROM t WHERE TimeCol < (now() - INTERVAL X SECOND)

http://dev.mysql.com/doc/refman/4.1/en/date-and-time-functions.html

- Original Message - 
From: Chris W [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Friday, July 07, 2006 4:23 AM
Subject: Date functions


It's late and I just gave up reading the manual.  Can someone please 
tell me the easiest way to do a query that will return all rows with a 
time stamp that is X number of seconds older than the current time?  
Something like this.


SELECT * FROM t
WHERE TimeCol < (now() - 60*60*24*3)

Yes I know that is just 3 days but other times I will want to find 
records that are a few hours old so I like using the formula.


--
Chris W
KE5GIX

Gift Giving Made Easy
Get the gifts you want  
give the gifts they want
One stop wish list for any gift, 
from anywhere, for any occasion!

http://thewishzone.com








Access Denied in Query Browser

2006-07-07 Thread jonas

Hello,

I get an error message when I start Query Browser.

Could not fetch Catalogs/Schema data.
The following error occurred: Access denied; you need the SHOW DATABASES 
privilege for this operation.


My hosting provider has started their server with --skip-show-database.
Is it possible to get QB to list only my database?




Errors compiling MySQL 5.0.22 on IRIX 6.5.25

2006-07-07 Thread Penny Oots

Has anyone successfully compiled MySQL 5.0.22 from source on any IRIX
version?  We are running IRIX 6.5.25 and have tried cc and gcc compilers
with a variety of parameters.  All generate errors after an hour or
more.  If you have been successful, would you mind sharing the parameters
used and IRIX version?

Thanks for any help!
Penny





Join in two different database

2006-07-07 Thread Vittorio Zuccalà
Hello,
mysql 4.1.11 on debian.

I've got db1 with table1 and db2 with table2.
How can I join table1 and table2?

If both tables were in the same db I could write:
select a.campo1, b.campo2 FROM table1 a INNER JOIN table2 b ON
a.campo1=b.campo2;

But what about my problem?

Thank you very much!!


-- 
vittorio zuccalà
Finconsumo Banca SPA
[EMAIL PROTECTED]
Tel: 011-6319464




Innodb import tuning on Sun T2000

2006-07-07 Thread Russell Horn
Folks,

I'm trying to import a sql dump of a database which is taking an age.
I've disabled foreign key constraints, unique checks and set autocommit
to 0 but this is still slow.

My data file has a number of tables, one of which has circa 3.5 million
tuples taking up about 500MB of data with 900MB of indexes. This seems
to be where we are slowing down. Most the other tables are much smaller.

The server is a Sun T2000 with 6 cores and 8GB of RAM. We're using the
local disks.

I'm using the my.cnf file from
http://media.zilbo.com/img/feh/mysql/my.cnf though I've increased
innodb_buffer_pool_size to 3G.

Is there anything else I can do to speed up these operations, or should
I resign myself to the import taking several hours each time it's
required?

TIA,

Russell
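[Editorial note: the settings described above can be applied per-session around the load from the mysql client; a sketch, where the dump path is a placeholder:]

```sql
SET foreign_key_checks = 0;   -- skip FK validation during the load
SET unique_checks = 0;        -- skip uniqueness checks on secondary indexes
SET autocommit = 0;           -- one big transaction instead of one per row

SOURCE /path/to/dump.sql;     -- mysql client command; path is a placeholder

COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```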





Re: PHP connects in Latin1 when it should do it in UTF-8

2006-07-07 Thread Eric Butera

On 7/6/06, Santiago del Castillo [EMAIL PROTECTED] wrote:

Hi, I'm having a bit of a headache with PHP and MySQL; I've got some questions:


1) I've got a database in UTF-8, and when I connect to it with mysql_connect
and exec a query with mysql_query, the results are in latin1. (I proved
this with mysql_query("SHOW VARIABLES LIKE 'char%'").)
2) Is there any way to force mysql to make connections in utf8?

here is the mysql status command report:



In my scripts I issue these commands on connect.  This seems to
resolve the problem for me.  I cannot change it in my.cnf since
changing the default from latin1 to utf8 would break previous sites.

SET NAMES 'utf8'
SET collation_connection = 'utf8_unicode_ci'




Difference between essential and complete distributions

2006-07-07 Thread Rob Desbois
Is there somewhere I can find the exact differences between the contents of the 
'essential' and 'complete' Windows MySQL distributions?
I've tried the source code and searched all over the website but can't find it 
anywhere.

--Rob






rand()

2006-07-07 Thread Jo�o C�ndido de Souza Neto
Hi everyone,

I've got a page where I ought to get 20 registers in random order, but I want 
to display them in alphabetical order.

Does anyone know if there is a way to get those 20 random registers in 
alphabetical order?

Thanks. 






RE: rand()

2006-07-07 Thread Jay Blanchard
[snip]
I´ve got a page where a ought to get 20 registers in ramdom order but i want 
to display it in an alphabetical order.

Someone knows if there is a way to get that 20 random registers in 
alphabetical order?
[/snip]

SORT BY `registers`




Can't configure instance w/ 5.0.22 instance wizard

2006-07-07 Thread cnelson
No matter what I do, it fails at the step where it's supposed to install
and start the Windows service, with an error 0.  Is this a known issue?
It sure would be nice to get more information about the failure from
the wizard.




Re: rand()

2006-07-07 Thread Jo�o C�ndido de Souza Neto
Excuse me, but I don't understand your answer.

Could you explain it?

Jay Blanchard [EMAIL PROTECTED] escreveu na mensagem 
news:[EMAIL PROTECTED]
[snip]
I´ve got a page where a ought to get 20 registers in ramdom order but i want
to display it in an alphabetical order.

Someone knows if there is a way to get that 20 random registers in
alphabetical order?
[/snip]

SORT BY `registers` 






Re: Join in two different database

2006-07-07 Thread Dan Buettner

Vittorio - assuming the two databases exist within the same database
server, do this:

select a.campo1, b.campo2
FROM db1.table1 a INNER JOIN db2.table2 b ON a.campo1=b.campo2;

Dan


On 7/7/06, Vittorio Zuccalà [EMAIL PROTECTED] wrote:

Hello,
mysql 4.1.11 on debian.

I've db1 with table1 and db2 with table2.
How can i make a join into table1 and table2?

If each table are into the same db i can write:
select a.campo1, b.campo2 FROM table1 a INNER JOIN table2 b ON
a.campo1=b.campo2;

But what about my problem?

Thank you very much!!


--
vittorio zuccalà
Finconsumo Banca SPA
[EMAIL PROTECTED]
Tel: 011-6319464












RE: rand()

2006-07-07 Thread Jay Blanchard
[snip]
Excuse me, but I don't understand your answer.

Could you explain it?
[/snip]

Add it to the end of your query




Re: rand()

2006-07-07 Thread Eugene Kosov

I think you meant ORDER BY `registers`..

Jay Blanchard пишет:

[snip]
I´ve got a page where a ought to get 20 registers in ramdom order but i want 
to display it in an alphabetical order.


Someone knows if there is a way to get that 20 random registers in 
alphabetical order?

[/snip]

SORT BY `registers`






Re: rand()

2006-07-07 Thread Jo�o C�ndido de Souza Neto
Hi Jay, thanks a lot for your help.

I tried it and it didn't work, but I tried another way that works fine. I'll 
put it here because it could help other people.

select * from (
  select g.grade_id as grade_id, concat(p.nome, ' ', g.grade_subtitulo) as nome
  from grade g inner join produto p on g.produto_id = p.id
  where (select count(*) from produtos_lista pl where pl.grade_id = g.grade_id) = 0
  order by rand() limit 20
) as t1 order by t1.nome

Thanks again.

João Cândido de Souza Neto [EMAIL PROTECTED] escreveu na 
mensagem news:[EMAIL PROTECTED]
 Hi everyone,

 I´ve got a page where a ought to get 20 registers in ramdom order but i 
 want to display it in an alphabetical order.

 Someone knows if there is a way to get that 20 random registers in 
 alphabetical order?

 Thanks.
 






Re: rand()

2006-07-07 Thread mos

At 11:25 AM 7/7/2006, you wrote:

I think you meant ORDER BY `registers`..


He may have meant that but then you lose the randomness.
I think it has to be done in 2 steps. I don't see any way of doing it 
without creating a temporary table.


The SQL might look something like this:

drop temporary table if exists tmpreg;
create temporary table tmpreg select registers from tablex order by rand() 
limit 10;

select * from tmpreg order by registers;

Mike




Jay Blanchard пишет:

[snip]
I´ve got a page where a ought to get 20 registers in ramdom order but i 
want to display it in an alphabetical order.
Someone knows if there is a way to get that 20 random registers in 
alphabetical order?

[/snip]
SORT BY `registers`








Rekall

2006-07-07 Thread Timothy Murphy
I've used the KDE program rekall to a small extent in the past,
in order to set up and populate MySQL tables on a remote machine.
It has recently been seg-faulting:

[EMAIL PROTECTED] ~]$ rekall
KBLocation::setCacheSize: set to 0MB
Segmentation fault

In any case, I didn't find it very easy to use.
I wonder if anyone can suggest an alternative -
a GUI Linux application
which allows data to be entered in an SQL table.
My preference would be for Python and KDE,
but those aren't essential.

-- 
Timothy Murphy  
e-mail (80k only): tim /at/ birdsnest.maths.tcd.ie
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland




RE: mysqldump: Got errno 27 on write. file too large

2006-07-07 Thread Duhaime Johanne
Thank you to both of you.

Just an add-on, already mentioned to Michael: 
My file system allows unlimited file sizes.
I have an 11G file on my partition.
mercure{root}120: du -kh mercure.log.jui2006 
  11G   mercure.log.jui2006
The owner of this file is mysql (it is the log file).

Also the sysadmin was able to create big files with the mkfile command under 
the mysql account.


I have gone through the change history looking for the word mysqldump and I 
have not seen anything related.

So my best guess is to install MySQL 5.


Best  regards

-----Original Message-----
From: Greg 'groggy' Lehey [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, 05 July 2006 22:04
To: Duhaime Johanne; Michael Stassen
Cc: mysql@lists.mysql.com; Dominik Klein
Subject: Re: mysqldump: Got errno 27 on write. file too large

On Wednesday,  5 July 2006 at  9:12:52 -0400, Duhaime Johanne wrote:
 I have MySQL 4.1.7 on Solaris 9, 64 bits, and I want to mysqldump a +-4 
 GB db.

 ...

 The full directory that contains the *.frm, *.MYD,*.MYI files has the 
 following size:
 du -ks  /seqdata/mysql/autres_bds/regen
 3702719 /seqdata/mysql/autres_bds/regen ... I get the output du -k: 
 2098184 myregendump

 this error supposed to be:
 bin/perror 27
 Error code  27:  File too large
 As you can see I have plenty of space.

Error codes below 128 come from the kernel.  It's possible for applications to 
return error numbers in this range too, but it's not a good idea, and mysqldump 
doesn't do it.  So whatever's happening here, it's being reported by the kernel.

There are two numbers:

#define EFBIG   27  /* File too large */
#define ENOSPC  28  /* No space left on device */

EFBIG refers to limitations in the size of one file; you can get it even if 
there's plenty of space in the file system.  ENOSPC is the other way round: you 
can get it even if the file isn't at its maximum allowed size.

 In  the error file I have multiple times the line:
 InnoDB: Error: unable to create temporary file; errno: 2
 mercure{mysql}66: bin/perror 2
 Error code   2:  No such file or directory
 But  the directory exist.

I'd guess that it doesn't.  Unfortunately the message doesn't tell you which 
file it's trying to create.  This might be worth a bug report, since it 
seriously hinders you in finding out what that particular problem is.

Why does this not show up in your verbose mysqldump?


 Then I tried a verbose mysqldump.

 mercure{mysql}73: /seqweb/mysql/bin/mysqldump --opt --verbose regen > /seqdata/mysql/myregendump
 -- Connecting to localhost...
 -- Retrieving table structure for table cpgisland_Human_May2004...
 -- Sending SELECT query...
 -- Retrieving rows...
 ...
  21 tables
 -- Retrieving table structure for table unit_occurence_Human_May2004...
 -- Sending SELECT query...
 -- Retrieving rows...
 /seqweb/mysql/bin/mysqldump: Got errno 27 on write. This table is 1 
 GB of data and 500 MB of index.

Note that mysqldump is not very efficient in its format.  How big was the 
output file when it failed?  I'd hazard a guess at 2 GB (specifically, 
2147483647 bytes).  If this is the case, it's definitely a file system 
limitation.

 Then I tried a mysqldump of this table only:
 /seqweb/mysql/bin/mysqldump --opt --verbose regen
 unit_occurence_Human_May2004
 and it works fine.

 How can I solve this problem?

Well, you've found one workaround :-) 

What file system are you using?  Could this be (Sun's old) UFS?

I'm sure that Sun has file systems that aren't limited to 2 GB; you could use 
one of them.  They'll probably give you other advantages too.
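
If moving to a large-file-capable file system isn't an option, a common workaround for a 2 GB limit is to compress and split the dump stream so that no single output file crosses the limit. A sketch, assuming gzip and split are available:

```shell
# Dump, compress, and split into pieces below the 2 GB limit.
mysqldump --opt regen | gzip | split -b 1900m - regen.sql.gz.

# Restore by concatenating the pieces back into mysql.
cat regen.sql.gz.* | gunzip | mysql regen
```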

 I have looked at the previous message in the forum but could not find 
 anything answering my problem.

I'm relatively confident that this isn't a mysqldump problem.

On Wednesday,  5 July 2006 at 12:28:53 -0400, Michael Stassen wrote:

 My first thought is that Dominik is on the right track.  I get

 : perror 27
   OS error code  27:  File too large

 which suggests there is some OS limitation.  Perhaps the user running 
 mysqldump is limited?  Do you have any larger files owned by the same 
 user?  Can that user currently create a file larger than that using 
 another means?

Yes, this could be a disk quota issue.

 The other possibility would be a bug.  You are using version 4.1.7, 
 which is nearly 2 years old now (released October 2004).  The current 
 version is 4.1.20. If you have indeed hit a bug, your best bet would be to 
 upgrade
and try again.  You should probably at least read the *long* list 
 of bug fixes from 4.1.7 to 4.1.20 in the MySQL change history in the 
 manual http://dev.mysql.com/doc/refman/4.1/en/news-4-1-x.html.

While it's my duty not to stand up and flatly say it's not a mysqldump bug, I'd be 
very surprised in this case; see above for the reasoning.

Greg
--
Greg Lehey, Senior Software Engineer, Online Backup MySQL AB, 
http://www.mysql.com/ Echunga, South Australia
Phone: +61-8-8388-8286   Mobile: +61-418-838-708
VoIP:  sip:[EMAIL 

Re: Rekall

2006-07-07 Thread Daniel da Veiga

On 7/7/06, Timothy Murphy [EMAIL PROTECTED] wrote:

I've used the KDE program rekall to a small extent in the past,
in order to set up and populate MySQL tables on a remote machine.
It has recently been seg-faulting:

[EMAIL PROTECTED] ~]$ rekall
KBLocation::setCacheSize: set to 0MB
Segmentation fault

In any case, I didn't find it very easy to use.
I wonder if anyone can suggest an alternative -
a GUI Linux application
which allows data to be entered in an SQL table.
My preference would be for Python and KDE,
but those aren't essential.



Have you tried MySQL Administrator and Query Browser?

--
Daniel da Veiga
Computer Operator - RS - Brazil
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
--END GEEK CODE BLOCK--

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Rekall

2006-07-07 Thread Nicholas Vettese
If you are using an apt-based system, type 'apt-cache search mysql'. That 
should bring up some programs you can install.


nick



- Original Message - 
From: Daniel da Veiga [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Friday, July 07, 2006 1:12 PM
Subject: Re: Rekall



On 7/7/06, Timothy Murphy [EMAIL PROTECTED] wrote:

I've used the KDE program rekall to a small extent in the past,
in order to set up and populate MySQL tables on a remote machine.
It has recently been seg-faulting:

[EMAIL PROTECTED] ~]$ rekall
KBLocation::setCacheSize: set to 0MB
Segmentation fault

In any case, I didn't find it very easy to use.
I wonder if anyone can suggest an alternative -
a GUI Linux application
which allows data to be entered in an SQL table.
My preference would be for Python and KDE,
but those aren't essential.



Have you tried MySQL Administrator and Query Browser?

--
Daniel da Veiga
Computer Operator - RS - Brazil
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GCM/IT/P/O d-? s:- a? C++$ UBLA++ P+ L++ E--- W+++$ N o+ K- w O M- V-
PS PE Y PGP- t+ 5 X+++ R+* tv b+ DI+++ D+ G+ e h+ r+ y++
--END GEEK CODE BLOCK--

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Date functions

2006-07-07 Thread Chris W

Addison, Mark wrote:


From: Chris W  Sent: 07 July 2006 09:23
 

It's late and I just gave up reading the manual.  Can someone please 
tell me the easiest way to do a query that will return all 
rows with a 
time stamp that is X number of seconds older than the current time?  
Something like this.


SELECT * FROM t
WHERE TimeCol > (now() - 60*60*24*3)

Yes I know that is just 3 days but other times I will want to find 
records that are a few hours old so I like using the formula.
   



SELECT * FROM t
WHERE TimeCol > DATE_SUB(CURDATE(), INTERVAL 60*60*24*3 SECOND);
 



Maybe it was just too late at night, but I read about the DATE_SUB 
function in the manual and got the impression that it ignored the time 
part of a datetime field, so I could not use it for finding records only 
a few hours old.
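
For the record, DATE_SUB does not discard the time part of its first argument; the date-only behaviour comes from CURDATE(), which returns the current date with no time component. A sketch using NOW() instead, with the table and column names from the thread:

```sql
-- Rows with a timestamp in the last 3 hours.  NOW() carries both
-- date and time, so sub-day intervals behave as expected.
SELECT * FROM t
WHERE TimeCol > DATE_SUB(NOW(), INTERVAL 3 HOUR);
```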


--
Chris W
KE5GIX

Gift Giving Made Easy
Get the gifts you want  
give the gifts they want
One stop wish list for any gift, 
from anywhere, for any occasion!

http://thewishzone.com


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqldump: Got errno 27 on write. file too large

2006-07-07 Thread Daniel da Veiga

On 7/7/06, Duhaime Johanne [EMAIL PROTECTED] wrote:

Thank you to both of you.

Just a add-on, already mentionned to Michael:
My file system allows illimited files size.
I have a file of 11G on my partition.
mercure{root}120: du -kh mercure.log.jui2006
  11G   mercure.log.jui2006
The owner of this file is mysql (it is the log file).

Also the sysadmin was able to create files of big size with the mkfile command 
with Mysql account.


I have gone througt the change history looking for  the work mysqldump and I 
have not seen anythng related.

So I guest my best guess is to install Mysql 5.



Sorry, but I think this isn't going to save you.
The error you mention (as Greg told you) is not a MySQL error; it is a
kernel error reported when dealing with files larger than 2GB. This is
especially true if you're using an old kernel or an old filesystem (but
the filesystem is not important in this case, unless you use a really OLD
fs). Your tests don't settle the matter, as creating a file and
writing to it are completely different tasks. Check:

http://en.wikipedia.org/wiki/Wikipedia:Database_dump_import_problems#Got_error_27_from_table_handler

Please report your kernel version (output of uname -a) and
filesystem types (you can post your /etc/fstab file). You must use a
current kernel in order to use large files.

This may help you too:

http://cbbrowne.com/info/fs.html



Best  regards

-Message d'origine-
De : Greg 'groggy' Lehey [mailto:[EMAIL PROTECTED]
Envoyé : Wednesday, 05 July 2006 22:04
À : Duhaime Johanne; Michael Stassen
Cc : mysql@lists.mysql.com; Dominik Klein
Objet : Re: mysqldump: Got errno 27 on write. file too large

On Wednesday,  5 July 2006 at  9:12:52 -0400, Duhaime Johanne wrote:
 I have musql 4.1.7 on Solaris 9, 64 bits and I want to mysqldump a +-4
 gigas db.

 ...

 The full directory that contains the *.frm, *.MYD,*.MYI files has the
 following size:
 du -ks  /seqdata/mysql/autres_bds/regen
 3702719 /seqdata/mysql/autres_bds/regen ... I get the output du -k:
 2098184 myregendump

 this error supposed to be:
 bin/perror 27
 Error code  27:  File too large
 As you can see I have plenty of space.

Error codes below 128 come from the kernel.  It's possible for applications to 
return error numbers in this range too, but it's not a good idea, and mysqldump 
doesn't do it.  So whatever's happening here, it's being reported by the kernel.

There are two numbers:

#define EFBIG   27  /* File too large */
#define ENOSPC  28  /* No space left on device */

EFBIG refers to limitations in the size of one file; you can get it even if 
there's plenty of space in the file system.  ENOSPC is the other way round: you 
can get it even if the file isn't at its maximum allowed size.

 In  the error file I have multiple times the line:
 InnoDB: Error: unable to create temporary file; errno: 2
 mercure{mysql}66: bin/perror 2
 Error code   2:  No such file or directory
 But  the directory exist.

I'd guess that it doesn't.  Unfortunately the message doesn't tell you which 
file it's trying to create.  This might be worth a bug report, since it 
seriously hinders you in finding out what that particular problem is.

Why does this not show up in your verbose mysqldump?


 Then I tried a verbose mysqldump.

 mercure{mysql}73: /seqweb/mysql/bin/mysqldump --opt --verbose regen 
 /seqdata/mysql/myregendump
 -- Connecting to localhost...
 -- Retrieving table structure for table cpgisland_Human_May2004...
 -- Sending SELECT query...
 -- Retrieving rows...
 ...
  21 tables
 -- Retrieving table structure for table unit_occurence_Human_May2004...
 -- Sending SELECT query...
 -- Retrieving rows...
 /seqweb/mysql/bin/mysqldump: Got errno 27 on write This table is 1
 giga data and 500mb index.

Note that mysqldump is not very efficient in its format.  How big was the 
output file when it failed?  I'd hazard a guess at 2 GB (specifically, 
2147483647 bytes).  If this is the case, it's definitely a file system 
limitation.

 Then I tried a mysqldump of this table only:
 /seqweb/mysql/bin/mysqldump --opt --verbose regen
 unit_occurence_Human_May2004
 and it works fine.

 How can I solve this problem?

Well, you've found one workaround :-)

What file system are you using?  Could this be (Sun's old) UFS?

I'm sure that Sun has file systems that aren't limited to 2 GB; you could use 
one of them.  They'll probably give you other advantages too.

 I have looked at the previous message in the forum but could not find
 anything answering my problem.

I'm relatively confident that this isn't a mysqldump problem.

On Wednesday,  5 July 2006 at 12:28:53 -0400, Michael Stassen wrote:

 My first thought is that Dominik is on the right track.  I get

 : perror 27
   OS error code  27:  File too large

 which suggests there is some OS limitation.  Perhaps the user running
 mysqldump is limited?  Do you have any larger files owned by the same
 user?  Can that user currently create a file larger 

is it possible to estimate a backup file size?

2006-07-07 Thread Sergei S
Hi all,

I'm trying to figure out how much space would be necessary for a mysqldump 
with the --opt option. The InnoDB tablespace is using roughly 130 G, plus maybe 
5 G for various MyISAM files. 

Is it possible to get even a rough estimate?

Thanks in advance.
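
A rough estimate can be pulled from the server itself; on MySQL 5.0+ the totals can be summed from information_schema (on 4.x, SHOW TABLE STATUS reports the same numbers per table). Note the dump is plain SQL text, so it can come out larger than Data_length, while an InnoDB tablespace includes free space, so 130 G on disk often dumps to less:

```sql
-- Rough size of the stored data, in GB (MySQL 5.0+).
-- Index pages are excluded since they are not written to the dump.
SELECT SUM(data_length) / 1024 / 1024 / 1024 AS data_gb
FROM information_schema.tables;
```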

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Rekall

2006-07-07 Thread Timothy Murphy
On Friday 07 July 2006 18:28, Nicholas Vettese wrote:

 If you are using an apt-based system, type 'apt-cache search mysql'  That
 should bring up some programs you can install from.

I'm actually running Fedora.
I ran yum install *mysql*
which installed everything with mysql in its name -
quite a large number of packages,
which I haven't got round to examining yet!


-- 
Timothy Murphy  
e-mail (80k only): tim /at/ birdsnest.maths.tcd.ie
tel: +353-86-2336090, +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How does one speed up delete.

2006-07-07 Thread Jacob, Raymond A Jr
Env: Freebsd 6.0
MySql 4.1.18 
Mem: 1GB(?) can not tell without rebooting
Disk Avail: 4GB

Problem: the table data is 4.5GB.
I created a temporary table sidtemp in the database snort by typing:

CREATE TEMPORARY TABLE sidtemp
SELECT cid FROM event
WHERE timestamp < '2006-05-01';

Query OK, 7501376 rows affected (36.38 sec)
Records: 7501376 Duplicates: 0 Warnings: 0

Next I want to delete all rows from the table data when data.cid =
sidtemp.cid
So I started the following command on Jul 5 at 16:44 GMT:
DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid

It is now Jul 7 19:56 GMT. I had forgotten how long it takes to run this
delete command; as I recall it takes 15-20 days on just one database. I have
two (2) databases with the same schema. The databases are live now, and
usually, without this delete executing, mysql uses between 0-10%
of the CPU. The delete is causing mysql to use between 98-99% of the
CPU.

Any ideas on what I can do to speed up the Delete?

Thank you
Raymond


Re: How does one speed up delete.

2006-07-07 Thread Dan Buettner

Raymond, I would expect that adding an index on 'cid' column in your
'sidtemp' table would help quite a bit.
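
That suggestion as a one-liner (sketch):

```sql
-- Without this index, every row deleted from `data` forces a scan
-- of the 7.5M-row sidtemp table to evaluate the join condition.
ALTER TABLE sidtemp ADD INDEX (cid);
```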

Out of curiosity, why use a temp table in this situation?  Why not

DELETE data
FROM data, event
WHERE data.cid = event.cid
AND event.timestamp < '2006-05-01'

Dan

On 7/7/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:

Env: Freebsd 6.0
MySql 4.1.18
Mem: 1GB(?) can not tell without rebooting
Disk Avail: 4GB

Problem: the table data is 4.5GB.
I created a temporary table sidtemp in the database snort by typing:

CREATE TEMPORARY TABLE sidtemp
SELECT cid FROM event
WHERE timestamp  '2006-05-01';

Query OK, 7501376 rows affected (36.38 sec)
Records: 7501376 Duplicates: 0 Warnings: 0

Next I want to delete all rows from the table data when data.cid =
sidtemp.cid
So I started the following command on Jul 5 at 16:44 GMT:
DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid

It is now Jul 7 19:56 GMT. I had forgotten how long it takes to run this
delete
 command as I recall it takes 15-20days on just one database. I have
two(2)
Databases with the same schema. The databases are live now and
Usually without executing this delete mysql  uses between 0-10%
Of the CPU. The delete is causing the mysql to use between 98-99% of the

CPU.

Any ideas on what I can do to speed up the Delete?

Thank you
Raymond




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysqldump - dump file per table?

2006-07-07 Thread Dan Buettner

I'm preparing to implement some mysqldump-based backups, and would
really like to find an easy way to dump out one SQL file per table,
rather than single massive SQL file with all tables from all
databases.

In other words, if I have database DB1 with tables TBL1 and TBL2, and
database DB2 with tables TBL3 and TBL4, I'd end up with files named
something like this, containing just the table create and data for
each:

20060707.DB1.TBL1.sql
20060707.DB1.TBL2.sql
20060707.DB2.TBL3.sql
20060707.DB2.TBL4.sql

This would make selective restores a lot easier, and would also allow
us to set up development/testing environments more easily than one big
file.

I'd use mysqlhotcopy but we're in an InnoDB environment.

I can implement this with a little perl script but wondered if anyone
was aware of a tool out there already?

Dan
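
For anyone after the same thing, a minimal shell sketch of such a script (assuming credentials are supplied via ~/.my.cnf and mysql/mysqldump are on the PATH; names are illustrative):

```shell
#!/bin/sh
# Dump every table of every database into its own DATE.DB.TABLE.sql file.
DATE=`date +%Y%m%d`
OUTDIR=/path/to/backups
for DB in `mysql -N -B -e 'SHOW DATABASES'`; do
  for TBL in `mysql -N -B -e "SHOW TABLES FROM $DB"`; do
    mysqldump --opt "$DB" "$TBL" > "$OUTDIR/$DATE.$DB.$TBL.sql"
  done
done
```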

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqldump - dump file per table?

2006-07-07 Thread Dan Buettner

Yes, that's what I'm after, as I know it will do individual tables ...
but I'd like it to do one file for each and every table within each
and every database, without having to maintain a batch script with
multiple calls to mysqldump specifying them all.

It'd be something like

mysqldump -u user -psecret --all-databases --file-per-table
--output-dir=/path/to/backups

Dan


On 7/7/06, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

mysqldump will dump a database or just a table, just depends on what
you specify.

   mysqldump [OPTIONS] database [tables]

of course, if you want to automate this and don't know the table (or
database) names in advance you'd need to do something (e.g., a
mysqlshow) to get that list first.

is that what you're after, or am i missing something?


  - Rick



 Original Message 
 Date: Friday, July 07, 2006 02:53:11 PM -0500
 From: Dan Buettner [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Subject: mysqldump - dump file per table?

 I'm preparing to implement some mysqldump-based backups, and would
 really like to find an easy way to dump out one SQL file per table,
 rather than single massive SQL file with all tables from all
 databases.

 In other words, if I have database DB1 with tables TBL1 and TBL2, and
 database DB2 with tables TBL3 and TBL4, I'd end up with files named
 something like this, containing just the table create and data for
 each:

 20060707.DB1.TBL1.sql
 20060707.DB1.TBL2.sql
 20060707.DB2.TBL3.sql
 20060707.DB2.TBL4.sql

 This would make selective restores a lot easier, and would also allow
 us to set up development/testing environments more easily than one big
 file.

 I'd use mysqlhotcopy but we're in an InnoDB environment.

 I can implement this with a little perl script but wondered if anyone
 was aware of a tool out there already?

 Dan

-- End Original Message --




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How does one speed up delete.

2006-07-07 Thread Jacob, Raymond A Jr
 

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 07, 2006 15:48
To: Jacob, Raymond A Jr
Cc: mysql@lists.mysql.com
Subject: Re: How does one speed up delete.

Raymond, I would expect that adding an index on 'cid' column in your
'sidtemp' table would help quite a bit.

Out of curiousity, why use a temp table in this situation?  Why not

Dan:
   I had erroneously assumed that the delete would delete rows from data
and event. 
DELETE data
FROM data, event
WHERE data.cid = event.cid
AND event.timestamp < '2006-05-01'

I just stopped my previous query. I am running the above now.

I used a temporary table because I thought I only needed a table to
temporarily hold the event cids
that I wished to delete from the data table. Can you tell me when I should use
temporary tables?

I believe the commands below demonstrate that an index exists on data
and event:

mysql> SHOW INDEX FROM data;
+-------+------------+----------+--------------+-------------+-----------+-------------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Index_type |
+-------+------------+----------+--------------+-------------+-----------+-------------+------------+
| data  |          0 | PRIMARY  |            1 | sid         | A         |           3 | BTREE      |
| data  |          0 | PRIMARY  |            2 | cid         | A         |     9678480 | BTREE      |
+-------+------------+----------+--------------+-------------+-----------+-------------+------------+
2 rows in set (0.00 sec)

mysql> SHOW INDEX FROM event;
+-------+------------+----------+--------------+-------------+-----------+-------------+------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Index_type |
+-------+------------+----------+--------------+-------------+-----------+-------------+------------+
| event |          0 | PRIMARY  |            1 | sid         | A         |        NULL | BTREE      |
| event |          0 | PRIMARY  |            2 | cid         | A         |    14389173 | BTREE      |
| event |          1 | sig      |            1 | signature   | A         |        NULL | BTREE      |
| event |          1 | time     |            1 | timestamp   | A         |        NULL | BTREE      |
+-------+------------+----------+--------------+-------------+-----------+-------------+------------+
4 rows in set (0.00 sec)

Thank you,
 raymond
On 7/7/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:
 Env: Freebsd 6.0
 MySql 4.1.18
 Mem: 1GB(?) can not tell without rebooting Disk Avail: 4GB

 Problem: the table data is 4.5GB.
 I created a temporary table sidtemp in the database snort by typing:

 CREATE TEMPORARY TABLE sidtemp
 SELECT cid FROM event
 WHERE timestamp  '2006-05-01';

 Query OK, 7501376 rows affected (36.38 sec)
 Records: 7501376 Duplicates: 0 Warnings: 0

 Next I want to delete all rows from the table data when data.cid = 
 sidtemp.cid So I started the following command on Jul 5 at 16:44 GMT:
 DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid

 It is now Jul 7 19:56 GMT. I had forgotten how long it takes to run 
 this delete  command as I recall it takes 15-20days on just one 
 database. I have
 two(2)
 Databases with the same schema. The databases are live now and Usually

 without executing this delete mysql  uses between 0-10% Of the CPU. 
 The delete is causing the mysql to use between 98-99% of the

 CPU.

 Any ideas on what I can do to speed up the Delete?

 Thank you
 Raymond



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How does one speed up delete.

2006-07-07 Thread Dan Buettner

Yes, I agree, it looks like there are indexes on the columns in
question in both tables.

As to when to use a temporary table - I haven't got a clear answer for you.

I wrote an app once that used temp tables, and it gained quite a speed
advantage due to my requirement to first insert a bunch of data, then
update it, then move it into a real table.  It was much faster
overall than running the same operations against the real table.

However, the use of actual temporary tables became problematic during
replication from time to time (replication would break sometimes), so
I scrapped that and instead created a permanent MyISAM table that I
treated like a temp table.
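
A sketch of that pattern, with hypothetical names:

```sql
-- A permanent MyISAM work table used in place of CREATE TEMPORARY TABLE,
-- so the statements against it replicate cleanly.
CREATE TABLE IF NOT EXISTS work_scratch (
  cid INT UNSIGNED NOT NULL,
  INDEX (cid)
) ENGINE=MyISAM;

-- ... bulk insert, update, then move rows into the real table ...

TRUNCATE TABLE work_scratch;  -- reset between runs
```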

Dan



On 7/7/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:



-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Friday, July 07, 2006 15:48
To: Jacob, Raymond A Jr
Cc: mysql@lists.mysql.com
Subject: Re: How does one speed up delete.

Raymond, I would expect that adding an index on 'cid' column in your
'sidtemp' table would help quite a bit.

Out of curiousity, why use a temp table in this situation?  Why not

Dan:
   I had erroneously assumed that the delete would delete rows from data
and event.
DELETE data
FROM data, event
WHERE data.cid = event.cid
AND event.timestamp  2006-05-01

I just stopped my previous query. I am running the above now.

I used a temporary table because I thought I only needed the table to
hold the events.cid's temporarily that
I wished to delete from the data table. Can you tell when I should use
temporary tables.

Below I believe command below demonstrates that an index exists on data
and event?

mysql show index from data;
+---++--+--+-+--
-+-+--++--++-+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation

| | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+---++--+--+-+--
-+-+--++--++-+
| data  |  0 | PRIMARY  |1 | sid | A
|   3 | NULL | NULL   |  | BTREE  | |
| data  |  0 | PRIMARY  |2 | cid | A
| 9678480 | NULL | NULL   |  | BTREE  | |
+---++--+--+-+--
-+-+--++--++-+
2 rows in set (0.00 sec)

mysql show index from event;
+---++--+--+-+--
-+-+--++--++-+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation

| | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+---++--+--+-+--
-+-+--++--++-+
| event |  0 | PRIMARY  |1 | sid | A
|NULL | NULL | NULL   |  | BTREE  | |
| event |  0 | PRIMARY  |2 | cid | A
|14389173 | NULL | NULL   |  | BTREE  | |
| event |  1 | sig  |1 | signature   | A
|NULL | NULL | NULL   |  | BTREE  | |
| event |  1 | time |1 | timestamp   | A
|NULL | NULL | NULL   |  | BTREE  | |
+---++--+--+-+--
-+-+--++--++-+
4 rows in set (0.00 sec)

Thank you,
 raymond
On 7/7/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:
 Env: Freebsd 6.0
 MySql 4.1.18
 Mem: 1GB(?) can not tell without rebooting Disk Avail: 4GB

 Problem: the table data is 4.5GB.
 I created a temporary table sidtemp in the database snort by typing:

 CREATE TEMPORARY TABLE sidtemp
 SELECT cid FROM event
 WHERE timestamp  '2006-05-01';

 Query OK, 7501376 rows affected (36.38 sec)
 Records: 7501376 Duplicates: 0 Warnings: 0

 Next I want to delete all rows from the table data when data.cid =
 sidtemp.cid So I started the following command on Jul 5 at 16:44 GMT:
 DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid

 It is now Jul 7 19:56 GMT. I had forgotten how long it takes to run
 this delete  command as I recall it takes 15-20days on just one
 database. I have
 two(2)
 Databases with the same schema. The databases are live now and Usually

 without executing this delete mysql  uses between 0-10% Of the CPU.
 The delete is causing the mysql to use between 98-99% of the

 CPU.

 Any ideas on what I can do to speed up the Delete?

 Thank you
 Raymond



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To 

RE: mysqldump: Got errno 27 on write. file too large

2006-07-07 Thread Duhaime Johanne
Thank you

Of course this is bad news. It is much easier to upgrade to MySQL 5 than to change 
the kernel.



uname -a
SunOS mercure 5.9 Generic_118558-04 sun4u sparc SUNW,Ultra-80

I do not have fstab but vfstab
mercure{root}171: more vfstab
#device device  mount   FS  fsckmount   mount
#to mount   to fsck point   typepassat boot options
#
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr  ufs 1   yes -
fd  -   /dev/fd fd  -   no  -
/proc   -   /proc   proc-   no  -
/dev/dsk/c0t0d0s1   -   -   swap-   no  -
/dev/dsk/c0t1d0s1   -   -   swap-   no  -
/dev/dsk/c0t0d0s0   /dev/rdsk/c0t0d0s0  /   ufs 1   no  
-
/dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3  /varufs 1   no  
-
/dev/dsk/c0t0d0s4   /dev/rdsk/c0t0d0s4  /optufs 1   yes 
-
/dev/dsk/c2t5d0s0   /dev/rdsk/c2t5d0s0  /seqweb ufs 1   yes 
-
/dev/dsk/c2t5d0s5   /dev/rdsk/c2t5d0s5  /oracle ufs 1   yes 
-
/dev/dsk/c2t5d0s6   /dev/rdsk/c2t5d0s6  /dbsufs 1   yes 
-
/dev/dsk/c1t5d0s2   /dev/rdsk/c1t5d0s2  /seqdataufs 1   
yes -
terre:/opt/net  -   /opt/netnfs -   yes bg,soft,retry=10
#terre:/opt/samba  -   /opt/sambanfs -   yes 
bg,soft,retry=10
terre:/opt/CiscoSecure  -   /opt/CiscoSecurenfs -   yes 
bg,soft,retry=10
#terre:/archives -   /archives   nfs -   yes 
bg,soft,retry=10
titan:/var/mail  -   /var/mail   nfs -   yes 
actimeo=0,bg,soft,retry=10
swap-   /tmptmpfs   -   yes -




-Message d'origine-
De : Daniel da Veiga [mailto:[EMAIL PROTECTED] 
Envoyé : Friday, 07 July 2006 13:38
À : mysql@lists.mysql.com
Objet : Re: mysqldump: Got errno 27 on write. file too large

On 7/7/06, Duhaime Johanne [EMAIL PROTECTED] wrote:
 Thank you to both of you.

 Just a add-on, already mentionned to Michael:
 My file system allows illimited files size.
 I have a file of 11G on my partition.
 mercure{root}120: du -kh mercure.log.jui2006
   11G   mercure.log.jui2006
 The owner of this file is mysql (it is the log file).

 Also the sysadmin was able to create files of big size with the mkfile 
 command with Mysql account.


 I have gone througt the change history looking for  the work mysqldump and I 
 have not seen anythng related.

 So I guest my best guess is to install Mysql 5.


Sorry, but I think this isn't going to save you.
The error you mention (as Greg told you) is not a MySQL error, it is a kernel 
error reported when dealing with files larger than 2GB. This is specially true 
if you're using an old kernel, or old filesystem (but filesystem is not 
important in this case, only if you use a really OLD fs). Your tests are no 
good for this matter, as creating a file and writting it is a complete 
different task. Check:

http://en.wikipedia.org/wiki/Wikipedia:Database_dump_import_problems#Got_error_27_from_table_handler

Please report your kernel version (output of uname -a) and filesystem types 
(you can post your /etc/fstab file). You must use a current kernel in order to 
use large files.

This may help you too:

http://cbbrowne.com/info/fs.html


 Best  regards

 -Message d'origine-
 De : Greg 'groggy' Lehey [mailto:[EMAIL PROTECTED] Envoyé : Wednesday, 
 05 July 2006 22:04 À : Duhaime Johanne; Michael Stassen Cc : 
 mysql@lists.mysql.com; Dominik Klein Objet : Re: mysqldump: Got errno 
 27 on write. file too large

 On Wednesday,  5 July 2006 at  9:12:52 -0400, Duhaime Johanne wrote:
  I have musql 4.1.7 on Solaris 9, 64 bits and I want to mysqldump a 
  +-4 gigas db.
 
  ...
 
  The full directory that contains the *.frm, *.MYD,*.MYI files has 
  the following size:
  du -ks  /seqdata/mysql/autres_bds/regen
  3702719 /seqdata/mysql/autres_bds/regen ... I get the output du -k:
  2098184 myregendump
 
  this error supposed to be:
  bin/perror 27
  Error code  27:  File too large
  As you can see I have plenty of space.

 Error codes below 128 come from the kernel.  It's possible for applications 
 to return error numbers in this range too, but it's not a good idea, and 
 mysqldump doesn't do it.  So whatever's happening here, it's being reported 
 by the kernel.

 There are two numbers:

 #define EFBIG   27  /* File too large */
 #define ENOSPC  28  /* No space left on device */

 EFBIG refers to limitations in the size of one file; you can get it even if 
 there's plenty of space in the file system.  ENOSPC is the other way round: 
 you can get it even if the file isn't at its maximum allowed size.

  In  the error file I have multiple times the line:
  InnoDB: Error: unable to create temporary file; errno: 2
  mercure{mysql}66: bin/perror 2
  Error code   2:  No such file or 

Re: mysqldump: Got errno 27 on write. file too large

2006-07-07 Thread Daniel da Veiga

On 7/7/06, Duhaime Johanne [EMAIL PROTECTED] wrote:

Thank you

Of course this is a bad new. Much easier to upgrade to Mysql 5 than to change 
the Kernel.



Yeah, unfortunately it is...




uname -a
SunOS mercure 5.9 Generic_118558-04 sun4u sparc SUNW,Ultra-80


Not so good news. I'm not an expert in this kind of kernel (SPARC),
but I guess it's been a while since you last upgraded it, hasn't it? If I'm
wrong, please excuse my complete lack of knowledge of these systems.
I've done some quick research and it appears to fall under the problems
described in the wiki page I pointed you to.



I do not have fstab but vfstab
mercure{root}171: more vfstab
#device device  mount   FS  fsckmount   mount
#to mount   to fsck point   typepassat boot options
#
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr  ufs 1   yes -
fd  -   /dev/fd fd  -   no  -
/proc   -   /proc   proc-   no  -
/dev/dsk/c0t0d0s1   -   -   swap-   no  -
/dev/dsk/c0t1d0s1   -   -   swap-   no  -
/dev/dsk/c0t0d0s0   /dev/rdsk/c0t0d0s0  /       ufs 1   no  -
/dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3  /var    ufs 1   no  -
/dev/dsk/c0t0d0s4   /dev/rdsk/c0t0d0s4  /opt    ufs 1   yes -
/dev/dsk/c2t5d0s0   /dev/rdsk/c2t5d0s0  /seqweb ufs 1   yes -
/dev/dsk/c2t5d0s5   /dev/rdsk/c2t5d0s5  /oracle ufs 1   yes -
/dev/dsk/c2t5d0s6   /dev/rdsk/c2t5d0s6  /dbs    ufs 1   yes -
/dev/dsk/c1t5d0s2   /dev/rdsk/c1t5d0s2  /seqdata    ufs 1   yes -
terre:/opt/net  -   /opt/net    nfs -   yes bg,soft,retry=10
#terre:/opt/samba   -   /opt/samba  nfs -   yes bg,soft,retry=10
terre:/opt/CiscoSecure  -   /opt/CiscoSecure    nfs -   yes bg,soft,retry=10
#terre:/archives    -   /archives   nfs -   yes bg,soft,retry=10
titan:/var/mail -   /var/mail   nfs -   yes actimeo=0,bg,soft,retry=10
swap    -   /tmp    tmpfs   -   yes -



As I thought, your filesystem seems OK; I keep my bet on a kernel issue.
Anyone else?




-Message d'origine-
De : Daniel da Veiga [mailto:[EMAIL PROTECTED]
Envoyé : Friday, 07 July 2006 13:38
À : mysql@lists.mysql.com
Objet : Re: mysqldump: Got errno 27 on write. file too large

On 7/7/06, Duhaime Johanne [EMAIL PROTECTED] wrote:
 Thank you to both of you.

 Just an add-on, already mentioned to Michael:
 My file system allows unlimited file sizes.
 I have a file of 11G on my partition.
 mercure{root}120: du -kh mercure.log.jui2006
   11G   mercure.log.jui2006
 The owner of this file is mysql (it is the log file).

 Also the sysadmin was able to create big files with the mkfile 
command under the mysql account.


 I have gone through the change history looking for the word mysqldump and 
have not seen anything related.

 So I guess my best bet is to install MySQL 5.


Sorry, but I think this isn't going to save you.
The error you mention (as Greg told you) is not a MySQL error; it is a kernel 
error reported when dealing with files larger than 2GB. This is especially true 
if you're using an old kernel or an old filesystem (though the filesystem only 
matters here if it is really OLD). Your tests are no good for this matter, as 
creating a file and writing to it are completely different tasks. Check:

http://en.wikipedia.org/wiki/Wikipedia:Database_dump_import_problems#Got_error_27_from_table_handler

last_insert_id problem

2006-07-07 Thread Afshad Dinshaw

Hi
I'm using the latest version of MySQL.
When I run the following query:

select last_insert_id()

I get the error message:

function vcontacts.last_insert_id does not exist

note: vcontacts is the name of my database.


anyone know why?

thanks



Re: last_insert_id problem

2006-07-07 Thread John L Meyer

Afshad Dinshaw wrote:

Hi
I'm using the latest version of MySQL.
When I run the following query:

select last_insert_id()

I get the error message:

function vcontacts.last_insert_id does not exist

note: vcontacts is the name of my database.


anyone know why?

thanks


I can run it both uppercase and lowercase (mysql 4.1.13-Max).



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]