using temporary / using filesort and disk tables

2003-11-19 Thread Arnaud
Hi!

I have a query that always creates temporary tables on disk (the ratio
created_tmp_disk_tables/created_tmp_tables is close to 0.8, but I think it
is 1 for this particular query).
This query joins 4 tables, groups by one field and orders by another field (or
sometimes an SQL variable).
When I EXPLAIN it, I get 'where used; using temporary; using filesort' for
the first table of the join (whatever the order of the join), and 'where
used' for the other ones.
Only 2000 rows are scanned for the first table, and 1 for the 3 other
ones.
The variables tmp_table_size and max_heap_table_size are both set very high
(~512M).

I would like to get rid of those disk tables, to improve the performance of
my query. I understand that using a GROUP BY and an ORDER BY on different
fields implies the use of a temporary table. What I don't understand is why
this table is created on disk and not in memory. Is it because of the
filesort? If so, how could I get rid of the filesort?
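
To illustrate, a rough sketch of the shape of the statement (hypothetical
table and column names; the real query joins 4 tables and sometimes orders
by an SQL variable):

SELECT t1.group_col, COUNT(*)
FROM t1, t2, t3, t4
WHERE t1.id = t2.t1_id AND t2.id = t3.t2_id AND t3.id = t4.t3_id
GROUP BY t1.group_col
ORDER BY t1.sort_col;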

If this is not clear enough, I can post a sample query and the result of the
explain.

Thanks in advance!

Arnaud


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



functions in libmysqlclient.a conflicting with my own!

2003-11-19 Thread Bo Do
Hello all,
 
I'm using mysql 3.23.49 under Debian Linux and gcc 2.95.4.
 
I'm writing a mysql client in C, and therefore I link libmysqlclient.a in my program.
 
The problem is that I also use and link a static list-handling library I wrote
some time ago.
And of course a problem arises with multiple definitions of various list functions:
once in the mysqlclient library and once in my own list library.
 
I don't use any list functions from the mysqlclient library; I didn't even know they
existed until now!
 
So my question is: how can I compile and link my program so that it uses my own list
functions and not the list functions in mysqlclient?
Right now, it won't link because of multiple definitions of some of the list
functions.
 
I can't change the function names in my own list library either, because a lot of
other applications are using it!
 
Thankful for any help or tips that could help me solve my problem!
 
Regards,
/Bo

p.s. This is my second attempt at making this post.
The last one just disappeared!
I apologize for any double postings.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SQL [Q] how to migrate 4.0 -> 4.1

2003-11-19 Thread Victoria Reznichenko
Andrey Kotrekhov [EMAIL PROTECTED] wrote:
 Andrey Kotrekhov [EMAIL PROTECTED] wrote:
  Hi, All.
  Other question is:
  Can I create case sensitive field with right ordering of national
  characters.
 

 Take a look at the collation name: 'ci' at the end of the collation name means
 case insensitive, 'cs' - case sensitive, 'bin' - binary:
 http://www.mysql.com/doc/en/Charset-MySQL.html

 
 Thank you. But what about binary fields?
 When I start mysqld-4.0, the fields in the table are char(x) binary.
 But when I start mysql-4.1.0, in the same table the same field is not
 binary.
 Is this a bug?

Nope. From v4.1 on, BINARY means that no collation is applicable to the column.

 Or is there a right way to restore the binary flag on a field?

Use a collation with _bin at the end of the collation name.
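
For example, a hedged sketch with a hypothetical table and column, assuming
the latin1 character set:

    ALTER TABLE t MODIFY c CHAR(10) CHARACTER SET latin1 COLLATE latin1_bin;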


-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Victoria Reznichenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



very slow delete queries - never ending

2003-11-19 Thread riga . riga
I have a problem with a fairly big table on mysql 4.0.16-log / debian linux.

I played around with indexes, delete quick and such, but I just can't get it
to work.
The following table holds 35 million rows and has 5 million inserts/replaces per
day. To clean it up I want to delete all rows older than X days.

I would be very happy if somebody could help me on this. I'm stuck. I worked
with tables of that size with 3.23.49-log and didn't have problems, although
I must say that the amount of inserts is very high in this case.

The server is a 2.5 GHz Pentium 4, 1.5 GB RAM, SCSI RAID disks and such
hardware, so performance should not be a problem. Which variables in mysql should I
modify? Does anybody have experience with that and can
help?

thanks!
Richard

+-----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table     | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| datatable |          0 | PRIMARY  |            1 | ixno        | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| datatable |          0 | PRIMARY  |            2 | srcno       | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| datatable |          0 | PRIMARY  |            3 | acttime     | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
| datatable |          0 | PRIMARY  |            4 | tino        | A         |    35919333 |     NULL | NULL   |      | BTREE      |         |
| datatable |          1 | dzeit    |            1 | acttime     | A         |      119333 |     NULL | NULL   |      | BTREE      |         |
+-----------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+


mysql> explain datatable;
+------------+----------------------+------+-----+---------------------+-------+
| Field      | Type                 | Null | Key | Default             | Extra |
+------------+----------------------+------+-----+---------------------+-------+
| ixno       | int(11) unsigned     |      | PRI | 0                   |       |
| srcno      | smallint(6) unsigned |      | PRI | 0                   |       |
| acttime    | datetime             |      | PRI | 0000-00-00 00:00:00 |       |
| tino       | int(10) unsigned     |      | PRI | 0                   |       |
| gl         | double(10,4)         | YES  |     | NULL                |       |
| gl_volumen | int(11)              | YES  |     | NULL                |       |
| bi         | double(10,4)         | YES  |     | NULL                |       |
| bi_volumen | int(11)              | YES  |     | NULL                |       |
+------------+----------------------+------+-----+---------------------+-------+
8 rows in set (0.00 sec)

mysql> select count(*) from datatable where acttime < '2003-11-14 09:39:49';
+--+
| count(*) |
+--+
|  7194367 |
+--+
1 row in set (3 min 22.15 sec)

mysql> select count(*) from datatable;
+--+
| count(*) |
+--+
| 36003669 |
+--+
1 row in set (5.87 sec)

mysql> delete quick from datatable where acttime < '2003-11-14 09:39:49';
or
mysql> delete from datatable where acttime < '2003-11-14 09:39:49';

...takes forever. I killed it after 20 hours...

-- 
GMX Christmas special: win a dream trip to the Seychelles!

A reindeer has run off. Find Rudolph! Great prizes await as a reward.
http://www.gmx.net/de/cgi/specialmail/

+++ GMX - the first address for Mail, Message, More! +++


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: sleeping connections

2003-11-19 Thread Egor Egorov
dan orlic [EMAIL PROTECTED] wrote:
 is there a way to close aging sleeping connections after they have 
 slept for a certain amount of time? a setting or something in mysql?

Take a look at wait_timeout and interactive_timeout variables:
http://www.mysql.com/doc/en/SHOW_VARIABLES.html
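
For example, to drop idle connections after 5 minutes (a sketch; SET GLOBAL
needs 4.0.3 or later, otherwise set wait_timeout in my.cnf and restart):

    mysql> SET GLOBAL wait_timeout = 300;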



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Easy (?) SELECT questions

2003-11-19 Thread Egor Egorov
Mark Wilson [EMAIL PROTECTED] wrote:
 Two related questions.
 1. I have a table with a date field. (Ex: '04/13/2002'). I want to get a list 
 of all UNIQUE values for that field (many entries from the same day), i.e., all 
 days with entries.
 **
 CREATE TABLE metrics_events_power (
  mep_id int(11) NOT NULL auto_increment,
  mep_date text,
  mep_time time DEFAULT '00:00:00' NOT NULL
 )
 INSERT INTO metrics_events_power VALUES ('1', '04/13/2002', '10:41:19');
 INSERT INTO metrics_events_power VALUES ('1', '04/13/2002', '10:46:19');
 INSERT INTO metrics_events_power VALUES ('1', '04/14/2002', '11:51:19');
 **
 
 I want a query that returns for this data '04/13/2002' and '04/14/2002'.

Use a WHERE clause to set retrieval conditions for rows, e.g. WHERE mep_date='04/13/2002' 
OR mep_date='04/14/2002', and GROUP BY mep_date.
Why do you use TEXT column type for the dates? You can use DATE column type and 
DATE_FORMAT() function:
http://www.mysql.com/doc/en/DATETIME.html
http://www.mysql.com/doc/en/Date_and_time_functions.html 
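
For the list of all days with entries, a sketch against the table from your
message:

    SELECT mep_date FROM metrics_events_power GROUP BY mep_date;

(SELECT DISTINCT mep_date ... gives the same result.)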

 
 2. Now I have X tables with those date fields. How do I get a list from all 
 specified tables with the unique dates from them?  For instance, 2 more tables:
 
 **
 CREATE TABLE arf_events_power (
  aep_id int(11) NOT NULL auto_increment,
  aep_date text,
  aep_time time DEFAULT '00:00:00' NOT NULL
 )
 INSERT INTO arf_events_power VALUES ('1', '05/13/2002', '10:41:19');
 INSERT INTO arf_events_power VALUES ('1', '05/24/2002', '10:46:19');
 INSERT INTO arf_events_power VALUES ('1', '06/21/2002', '11:51:19');
 
 CREATE TABLE blah_events_power (
  bep_id int(11) NOT NULL auto_increment,
  bep_date text,
  bep_time time DEFAULT '00:00:00' NOT NULL
 )
 INSERT INTO blah_events_power VALUES ('1', '04/13/2002', '10:41:19');
 INSERT INTO blah_events_power VALUES ('1', '04/13/2002', '10:46:19');
 INSERT INTO blah_events_power VALUES ('1', '07/28/2002', '11:51:19');
 **
 
 Should return 6 dates:
 '04/13/2002'
 '04/14/2002'
 '05/13/2002'
 '05/24/2002'
 '06/21/2002'
 '07/28/2002'

Use UNION:
http://www.mysql.com/doc/en/UNION.html
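
A sketch combining the three tables from your message (UNION without ALL
removes duplicates, so each date is returned once; this assumes a 4.x server,
since UNION is not available before 4.0):

    SELECT mep_date FROM metrics_events_power
    UNION
    SELECT aep_date FROM arf_events_power
    UNION
    SELECT bep_date FROM blah_events_power;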



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL Corruption

2003-11-19 Thread Neil Edgar
Description:
When we run the mysql database for any length of time, we end up with
corruptions such as:
/usr/local/mysql/bin/myisamchk: error: record delete-link-chain
corrupted MyISAM-table 'CollectedValue' is corrupted
 
CollectedValue is a table from our schema:
 
create table CollectedValue(oid varchar(255) binary not null,
instance varchar(80) binary not null,
timeOfCollection timestamp not null,
collectionName varchar(20) binary not null,
collectingNode varchar(30) binary not null,
cellId smallint not null,
value varchar(255) binary not null,
index(timeOfCollection),
index(collectingNode,cellId),
primary key(oid, instance, timeOfCollection, collectionName,
collectingNode));
 
However, we have seen the corruption on other tables.
 
After the corruption we get the following error trying to insert into
the table:
General error: Can't open file: 'CollectedValue.MYD'. (errno: 144)
SQLState: S1000 ErrorCode: 1016 query INSERT INTO CollectedValue ( oid,
instance, timeOfCollection, collectionName, collectingNode, cellId,
value) VALUES ('1.3.6.1.4.1.5586.3.2.2.2.5.3.1.5', '1.1',
'20031118235003', 'mike', '10.2.63.114', 1, '0')
 
 
How-To-Repeat:
We don't know what is causing these corruptions or how to reproduce them
 
Fix:
Close mysql and repair with /usr/local/mysql/bin/myisamchk -r -f -o
CollectedValue
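
A hedged alternative that avoids shutting the server down (it should be
roughly equivalent to the myisamchk run above for a MyISAM table) is to
repair from the SQL prompt:
    mysql> REPAIR TABLE CollectedValue;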
 
Submitter-Id:  [EMAIL PROTECTED]
Originator:Neil Edgar
Organization:  IPWireless
MySQL support: none
Synopsis:  error: record delete-link-chain corrupted MyISAM-table
Severity:  serious
Priority:  high
Category:  mysql
Class: sw-bug
Release:   mysql-3.23.56 (Official MySQL binary)
Server: /db/mysql/mysql/bin/mysqladmin  Ver 8.23 Distrib 3.23.56, for
sun-solaris2.8 on sparc
Copyright (C) 2000 MySQL AB  MySQL Finland AB  TCX DataKonsult AB
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL license
 
Server version  3.23.56
Protocol version10
Connection  Localhost via UNIX socket
UNIX socket /tmp/mysql.sock
Uptime: 2 days 18 hours 4 min 6 sec

Threads: 2  Questions: 285910  Slow queries: 12  Opens: 157  Flush
tables: 1  Open tables: 6 Queries per second avg: 1.202
Environment:
 
System: SunOS sun03 5.8 Generic_108528-03 sun4u sparc SUNW,Ultra-5_10
Architecture: sun4
 
Some paths:  /usr/bin/perl /usr/ucb/cc
 
Compilation info: CC='gcc'  CFLAGS='-O3 -fno-omit-frame-pointer'
CXX='gcc'  CXXFLAGS='-O3 -fno-omit-frame-pointer -felide-constructors
-fno-exceptions -fno-rtti'  LDFLAGS=''
LIBC:
-rw-r--r--   1 root bin  1749356 Jul 20  2000 /lib/libc.a
lrwxrwxrwx   1 root root  11 Jan 25  2001 /lib/libc.so -> ./libc.so.1
-rwxr-xr-x   1 root bin  1135056 Jul 20  2000 /lib/libc.so.1
-rw-r--r--   1 root bin  1749356 Jul 20  2000 /usr/lib/libc.a
lrwxrwxrwx   1 root root  11 Jan 25  2001 /usr/lib/libc.so -> ./libc.so.1
-rwxr-xr-x   1 root bin  1135056 Jul 20  2000 /usr/lib/libc.so.1
Configure command: ./configure '--prefix=/usr/local/mysql'
'--with-comment=Official MySQL binary' '--with-extra-charsets=complex'
'--with-server-suffix=' '--enable-thread-safe-client' '--enable-local-infile'
'--enable-assembler' '--with-named-z-libs=no'
'--with-named-curses-libs=-lcurses' '--disable-shared' '--without-innodb'
'CC=gcc' 'CFLAGS=-O3 -fno-omit-frame-pointer'
'CXXFLAGS=-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti'
'CXX=gcc'
Perl: This is perl, version 5.005_03 built for sun4-solaris



mysql_fix_privilege_tables script

2003-11-19 Thread Fernando Gomes Bernardino
Hi eveybody,

Can the mysql_fix_privilege_tables script be undone? I have already re-installed the mysql server 
and nothing changed.

Thanks a lot.

Fernando Bernardino

RE: mysqldump query

2003-11-19 Thread patrick kuah
Thanks...
I will try that :)
patrick


From: Victor Pendleton [EMAIL PROTECTED]
To: 'patrick kuah' [EMAIL PROTECTED],	Victor Pendleton 
[EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: RE: mysqldump query
Date: Mon, 17 Nov 2003 09:38:56 -0600

Are you running on a Linux platform?

File = .my.cnf
Location = ~
Contents =
line one: [client]
line two: user=userName
line three: password=userPassword
Security = chmod 600 .my.cnf
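
With that file in place the password no longer has to appear on the command
line at all; the daily backup can then be just (assuming the same dump target
as before):

mysqldump --all-databases --opt > testing.sql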
-Original Message-
From: patrick kuah [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2003 9:33 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: mysqldump query
Hi Victor,

Sorry... I'm not an SQL guy. Can you advise me how to proceed with this?
Thanks :)
From: Victor Pendleton [EMAIL PROTECTED]
To: 'patrick kuah' [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: RE: mysqldump query
Date: Mon, 17 Nov 2003 07:06:18 -0600

you could put the password in your .my.cnf file and change permissions so
that only the MySQL user can read the file.

-Original Message-
From: patrick kuah [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2003 5:15 AM
To: [EMAIL PROTECTED]
Subject: mysqldump query


Hi guys,

I have configured a daily backup for the mysql database. The problem is that I
need to specify the password when using mysqldump:

mysqldump --all-databases --opt --password=1234567 > testing.sql

How can I encrypt this --password=1234567?
Thanks

patrick

_
Keep track of Singapore  Malaysia stock prices.
http://www.msn.com.sg/money/


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
_
Download games, logos, wallpapers and lots more at MSN Mobile!
http://www.msn.com.sg/mobile/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
_
Keep track of Singapore  Malaysia stock prices. 
http://www.msn.com.sg/money/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


mysql progress enterprise db

2003-11-19 Thread Claes Wiberg
Hi,
I need to find a way to migrate a progress 4gl application from progress' own 
database enterprisedb to mysql. The thing is that progress 4gl doesn't really speak 
sql. Has anyone found a solution to this dilemma? Some sort of odbc driver that can do 
the trick?
brgds

Re: mysql progress enterprise db

2003-11-19 Thread John Nichel
Claes Wiberg wrote:

Hi,
I need to find a way to migrate a progress 4gl application from progress' own 
database enterprisedb to mysql. The thing is that progress 4gl doesn't really speak 
sql. Has anyone found a solution to this dilemma? Some sort of odbc driver that can do 
the trick?
brgds
Is this Progress 9?

--
By-Tor.com
It's all about the Rush
http://www.by-tor.com
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


table design of multi-lingual content

2003-11-19 Thread Centaur zeus
Hi all ,

  I want to design a table, say
T_PRODUCT { PRD_ID, NAME, DESCRIPTION, PRICE }
where I want NAME and DESCRIPTION to accept multi-language input.
I searched before and found two solutions:

1)
T_PRODUCT {PRD_ID, PRD_DETAIL_ID, PRICE}
T_PRODUCT_DETAIL { PRD_DEATIL_ID, LANG, NAME, DESCRIPTION }
2)
T_PRODUCT {PRD_ID, NAME_LANG_ID, DESCRIPTION_LANG_ID, PRICE}
T_LANG { LANG_ID, LANG, CONTENT }
But 1) seems to be a rather tedious approach, where I have to have one 
additional table for each table that needs languages. And I am afraid 2) will have 
too many joins if the multi-lingual attributes increase.
Is there a better design? Please advise.
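
To make 1) concrete, this is roughly what I mean (a sketch with made-up
column types, keyed on product id plus language):

CREATE TABLE T_PRODUCT (
  PRD_ID INT NOT NULL PRIMARY KEY,
  PRICE DECIMAL(10,2)
);
CREATE TABLE T_PRODUCT_DETAIL (
  PRD_ID INT NOT NULL,
  LANG CHAR(2) NOT NULL,
  NAME VARCHAR(100),
  DESCRIPTION TEXT,
  PRIMARY KEY (PRD_ID, LANG)
);

SELECT p.PRD_ID, d.NAME, d.DESCRIPTION, p.PRICE
FROM T_PRODUCT p, T_PRODUCT_DETAIL d
WHERE p.PRD_ID = d.PRD_ID AND d.LANG = 'en';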

Thanks.

Perseus

_
Help STOP SPAM with the new MSN 8 and get 2 months FREE*  
http://join.msn.com/?page=features/junkmail

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: mysql_fix_privilege_tables script

2003-11-19 Thread Egor Egorov
Fernando Gomes Bernardino [EMAIL PROTECTED] wrote:
 
 Can the mysql_fix_privilege_tables script be undone? I have already re-installed the mysql server 
 and nothing changed.
 

Why do you want to do it?



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



JOIN vs INNER JOIN?

2003-11-19 Thread Yves Goergen
Can someone tell me the difference between a JOIN and an INNER JOIN of two
tables, please? I can't find plain JOIN by itself documented in the MySQL
manual.

-- 
Yves Goergen
[EMAIL PROTECTED]
Please don't CC me (causes double mails)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: can't start server

2003-11-19 Thread aman raheja
Did you try netstat to see if something else is using the port?
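For example (assuming the default port 3306):

    netstat -an | grep 3306

If something shows up in LISTEN state there, another process already holds the port.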

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

I am sure that this problem has already been answered but somehow I can't
figure it out.
When I try to start mysqld I get the error 2002 can't start server: Bind
on TCP/IP port: Address already in use or...
I checked that the daemon is not running (ps -ef | grep mysqld) and it shows
nothing running. But I just can't start it.
Can someone there help me with easy steps to fix this?

Thanks

 

--
Your favorite stores, helpful shopping tools and great gift ideas. 
Experience the convenience of buying online with [EMAIL PROTECTED] 
http://shopnow.netscape.com/



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


MySQL 4.0.16 on RHEL3 AS IA64

2003-11-19 Thread Tomek Dudziak
Hi,

Did anyone get the 4.0.16 binary to work on RedHat 3 Enterprise for IA64?
I read that the same happens on RH3AS for AMD64.

"This was thoroughly tested with our production team and it works truly
just fine," according to MySQL support :-\

031102 22:07:17  mysqld started
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help 
diagnose
the problem, but since we have already crashed, something is definitely 
wrong
and this may fail.
 
key_buffer_size=8388600
read_buffer_size=131072
max_used_connections=0
max_connections=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections 
= 225791 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
031102 22:07:17  mysqld ended


-- 
Best regards,
 Tomek  mailto:[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: very slow delete queries - never ending

2003-11-19 Thread gerald_clark
Perhaps you could add a limit to the delete, pause, and re-run until done.
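
Something along these lines, as a rough sketch (DELETE ... LIMIT is available
in 4.0; repeat the statement until it affects 0 rows):

    DELETE FROM datatable WHERE acttime < '2003-11-14 09:39:49' LIMIT 10000;

Smaller chunks keep the lock time and the work per statement down.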

[EMAIL PROTECTED] wrote:

snip

mysql> delete quick from datatable where acttime < '2003-11-14 09:39:49';
or
mysql> delete from datatable where acttime < '2003-11-14 09:39:49';
...takes forever. I killed it after 20 hours...

 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: piping blob into shell command (tar)

2003-11-19 Thread Denis Mercier
On Tue, 2003-11-18 at 18:32, Paul DuBois wrote:
 Please reply to the list, not to me personally, so that
 others can follow this discussion.

that was my intention, sorry, I just used reply from my e-mail client
and realized too late that it did not default to the mysql list.
 
 At 17:05 -0500 11/18/03, Denis Mercier wrote:
 On Tue, 2003-11-18 at 16:40, Paul DuBois wrote:
   At 16:21 -0500 11/18/03, Denis Mercier wrote:
   here's what im trying to do, i have a tar file in a blob field
   and i'm trying to retrieve it and pipe it directly into tar
   to decompress it, without first writing it to the hard drive,
   
   here's what i've tried so far,
   I create a text file called test1:
   use my_db;
    select * into dumpfile '/usr/local/test1' from my_table;
   
    so when i try shell> mysql --pager < test1 | tar x
   
   the tar file does get written to /usr/local/test1 which is the step i'm
   trying to avoid, and nothing seems to get piped to tar?
 
   Right, because you've told the SELECT to write its output to
   /usr/local/test1, not to its standard output.  Hence, tar
   receives nothing.
 
   
   
   i also tried:
   use my_db;
   select * from my_table;
   
    so when i try shell> mysql --pager < test1 | tar x
   
   the tar file does not get written to /usr/local/test1 but i still dont
   get my untared files? write permissions are ok?
   what am i missing?
 
   Have you verified that the mysql command actually writes any output, for
   example, with this command:
 
   shell> mysql --pager < test1 | wc
 
   Do you get any output from tar at all, such as an error message?
   Maybe the output from mysql isn't suitable.  Couple of things to try:
 
 The output from mysql seems fine because when it does write the value
 of my blob I get a well formatted tar file,
 I'm thinking that maybe tar cannot handle a stream as it's input?
 
 That may be.  In that case, you should invoke tar as
 
 tar xf -
 

I have tried tar xf but tar gives me an error, it wants a file
specified.

I'll keep trying different tactics and share my findings, there must be
an answer!

 rather than as
 
 tar x
 
 -- 
 Paul DuBois, Senior Technical Writer
 Madison, Wisconsin, USA
 MySQL AB, www.mysql.com
 
 Are you MySQL certified?  http://www.mysql.com/certification/
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Need Help Upgrading From 4.x to 4.x

2003-11-19 Thread Mark Marshall
Hi, everyone.
 
I have a 4.0.4 beta install of Mysql on Red Hat 7.3.  I want to upgrade
it to 4.0.16, and keep all the data intact.  Do I just dump the
databases (just in case), stop the server, then ./configure, make, make
install over top of the old server and start it up again and see what
happens?
 
Thanks,
Mark


As of November 1st, 2003, Brandywine Senior Care's Corporate Office new contact 
information is:

Brandywine Senior Care, Inc.
525 Fellowship Road
Suite 360
Mt. Laurel, NJ 08054
(856) 813-2000 Phone
(856) 813-2020 Fax




Replacing Multiple Subqueries

2003-11-19 Thread Héctor Villafuerte D.
Hi all,
I know how to do this query with subqueries like this:
select * from traf_oper where
  rutasalien in
  (select ruta_salid from rutas where codigo_ope = 0)
  and
  rutaentran in
  (select ruta_salid from rutas where codigo_ope <> 0)
--

The table structures is like this:

mysql> explain traf_oper;
++--+--+-+-+---+
| Field  | Type | Null | Key | Default | Extra |
++--+--+-+-+---+
| tel| char(8)  | YES  | MUL | NULL|   |
| fecha  | char(8)  | YES  | | NULL|   |
| hora   | char(6)  | YES  | | NULL|   |
| telefb | char(14) | YES  | | NULL|   |
| tiempotasa | char(6)  | YES  | | NULL|   |
| rutasalien | char(7)  | YES  | | NULL|   |
| rutaentran | char(7)  | YES  | | NULL|   |
| serie  | char(3)  | YES  | | NULL|   |
| tipotraf   | int(1)   | YES  | | NULL|   |
| minutos| int(4)   | YES  | | NULL|   |
++--+--+-+-+---+
10 rows in set (0.44 sec)
mysql> explain rutas;
++--+--+-+-+---+
| Field  | Type | Null | Key | Default | Extra |
++--+--+-+-+---+
| CODIGO_TRA | int(6)   | YES  | | NULL|   |
| RUTA_SALID | char(20) | YES  | MUL | NULL|   |
| DESCRIPCIO | char(20) | YES  | | NULL|   |
| CODIGO_CIR | int(6)   | YES  | | NULL|   |
| TIPO_RUTA  | char(20) | YES  | | NULL|   |
| SISTEMA_TA | int(6)   | YES  | | NULL|   |
| CODIGO_OPE | int(6)   | YES  | | NULL|   |
| CORRELATIV | int(6)   | YES  | | NULL|   |
++--+--+-+-+---+
8 rows in set (0.08 sec)
--

I tried to do this:

mysql> explain select a.* from traf_oper a join rutas b on a.rutasalien 
= b.ruta_salid where b.codigo_ope = 0 and
a.rutaentran = b.ruta_salid where b.codigo_ope <> 0;
ERROR 1064: You have an error in your SQL syntax.  Check the manual that 
corresponds to your MySQL server version
for the right syntax to use near 'where b.codigo_ope <> 0' at line 1

How can I substitute multiple subqueries with JOIN's?
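
I guess what I am after is joining rutas twice under different aliases,
something like this (an untested sketch):

SELECT a.*
FROM traf_oper a
JOIN rutas b ON a.rutasalien = b.ruta_salid AND b.codigo_ope = 0
JOIN rutas c ON a.rutaentran = c.ruta_salid AND c.codigo_ope <> 0;

(If ruta_salid is not unique in rutas, a DISTINCT would be needed to avoid
duplicate rows.)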
Thanks in advance.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


pack_isam not working on Linux and HP-UX for mysql 4.0.15

2003-11-19 Thread mathan
Hello All

I can't run the pack_isam binary on Linux and HP-UX for mysql 4.0.15 version.
But it works fine for mysql 3.23.54a. 

===

Error on HP-UX (both 11.22 and 11.23) :

# pack_isam --join=/var/mysql/testdb/employee2 /var/mysql/testdb/employee 
/var/mysql/testdb/employee1

Compressing /var/mysql/testdb/employee2.ISD: (6 records)
- Calculating statistics
Memory fault(coredump)

===

Error on Linux (Red hat 9):

# pack_isam --join=/var/mysql/testdb/employee2 /var/mysql/testdb/employee 
/var/mysql/testdb/employee1

Compressing /var/mysql/testdb/employee2.ISD: (6 records)
- Calculating statistics
Segmentation fault

===

Where is the problem, is it in MySQL or on OS? Please help me to solve this problem.

Thanks,
Mathan

Re: mysql_fix_privilege_tables script

2003-11-19 Thread Paul DuBois
At 10:00 -0200 11/19/03, Fernando Gomes Bernardino wrote:
Hi eveybody,

Can the mysql_fix_privilege_tables script be undone? I have already 
re-installed the mysql server and nothing changed.
Why do you want to undo it?

You can undo it by restoring the grant tables from your most recent
backup.  But without knowing why you'd want to or what problems you're
encountering, it's difficult to say whether or not that's something
you should consider.
Thanks a lot.

Fernando Bernardino


--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: piping blob into shell command (tar)

2003-11-19 Thread Paul DuBois
At 11:03 -0500 11/19/03, Denis Mercier wrote:

i also tried:
   use my_db;
   select * from my_table;
   
   so when i try shell> mysql --pager < test1 | tar x
   
   the tar file does not get written to /usr/local/test1 but i still dont
   get my untared files? write permissions are ok?
   what am i missing?
 
   Have you verified that the mysql command actually writes any output, for
   example, with this command:
 
    shell> mysql --pager < test1 | wc
 
   Do you get any output from tar at all, such as an error message?
   Maybe the output from mysql isn't suitable.  Couple of things to try:
 
 The output from mysql seems fine because when it does write the value
 of my blob I get a well formatted tar file,
 I'm thinking that maybe tar cannot handle a stream as it's input?
 That may be.  In that case, you should invoke tar as

 tar xf -

I have tried tar xf but tar gives me an error, it wants a file
specified.
But I didn't say to use "tar xf".  I said to use "tar xf -".  That
dash is highly significant.  It means "the file to read is the
standard input".
--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: mysql_client

2003-11-19 Thread aman raheja
In order to access a database, say with the name data_db, you have to 
grant permissions to a remote user by doing the following on the server:

GRANT ALL ON data_db.* TO [EMAIL PROTECTED] IDENTIFIED BY 'password';
A few things to remember:
- Not a good idea to give access to root from remote clients
- Other information about GRANT can be obtained by searching GRANT on 
mysql.com

HIH

Regards

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

Hi, I have just installed the mysql_client
and I am trying to remotely connect to the database of
the other pc, so I gave the following command
mysqlshow -h 192.168.64.12 -u [EMAIL PROTECTED]
mysqlshow: Host 'akroneiro' is not allowed to connect
to this MySQL server
What must I do in order to gain access to the db? How
can I create users? Is there any easy way for that?
__
Do you Yahoo!?
Protect your identity with Yahoo! Mail AddressGuard
http://antispam.yahoo.com/whatsnewfree
 

--
Your favorite stores, helpful shopping tools and great gift ideas. 
Experience the convenience of buying online with [EMAIL PROTECTED] 
http://shopnow.netscape.com/



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: can't start server

2003-11-19 Thread aman raheja
Is this a production server?
This is not good advice, but if it is not, then just step down to a lower init 
level, turning off the network services, and then come back into level 3/5, 
whatever you are using, depending on the platform.

Then try to start the server. If you tell us what platform you are 
running on and what version of mysql you are using, it will help you get 
to the solution faster - not the restart-the-services kind :)

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

Aman, thank you for your reply.

I did check the ports with netstat and it appears that no other
application is using port 3306.
I am tempted to uninstall mysql and then reinstall it again, but
I do not know how to uninstall it the easy way.
Thanks again

[EMAIL PROTECTED]

-Original Message-
From: aman raheja [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 9:20 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: can't start server
did you try netstat and see if something else is using the port.

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

 

I am sure that this problem has already been answered but somehow I can't
figure it out.
When I try to start mysqld I get the error 2002 can't start server: Bind
on TCP/IP port: Address already in use or...

I checked that the daemon is not running (ps -ef | grep mysqld) and it shows
nothing running. But I just can't start it.
Can someone there help me with easy steps to fix this?

Thanks



   

--
Your favorite stores, helpful shopping tools and great gift ideas.
Experience the convenience of buying online with [EMAIL PROTECTED]
http://shopnow.netscape.com/
 

--
Your favorite stores, helpful shopping tools and great gift ideas. 
Experience the convenience of buying online with [EMAIL PROTECTED] 
http://shopnow.netscape.com/



replication, multi-table update work around

2003-11-19 Thread Ware Adams
We'd like to set up replication in a simple master/single slave setup, but
I can't figure out a way around an issue we'll have with multi table
updates.

The master will hold a set of databases, all of them will be replicated to
the slave.

The slave will hold the replicated databases plus a set of slave-only
databases.

To this point everything works fine.

In order to prevent updates on the replicated tables on the slave, we would
like to set up privileges so that users logging into the slave do not have
the update privilege.  In other words, on the slave:

update replicated_db.table set val=10 where id=1;

should fail.  This is easy enough to set up by just removing the update
privilege for these databases for our users on the slave machine.

However, we would like users of the slave to be able to perform updates on
their local, non-replicated tables using data from the replicated tables in
a multi-table update statement like this:

update non_replicated_db.table, replicated_db.table
set non_replicated_db.table.val=replicated_db.table.val
where
non_replicated_db.table.val_id=replicated_db.table.val_id;

So this query includes the replicated_db in the update statement, but only
actually writes to the non-replicated one.  Logically this is OK for our
setup since only the non-replicated table is altered.

However, the users on the slave machine don't have the update privilege for
the replicated_db, so this query fails, presumably b/c the privilege system
looks at all tables included in the update line rather than trying to
figure out which ones are actually changed (which would be a lot more
complex, I understand).

My question is, is there a privilege setup that will make this work?  If
not, is there a simple alternative to the multi table update statement?

I've thought of doing a 'replace into' in cases where the update is linked
on the primary key, or we could select out the records that match to a
temporary table, delete from the non-replicated table and read in from the
temp table.
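
The temporary-table variant I have in mind would look roughly like this
(an untested sketch; 'tbl' stands for the tables in the example above, and
any other columns of the local rows would have to travel through the
temporary table as well):

CREATE TEMPORARY TABLE tmp
  SELECT n.val_id, r.val
  FROM replicated_db.tbl r, non_replicated_db.tbl n
  WHERE r.val_id = n.val_id;

DELETE non_replicated_db.tbl
  FROM non_replicated_db.tbl, tmp
  WHERE non_replicated_db.tbl.val_id = tmp.val_id;

INSERT INTO non_replicated_db.tbl (val_id, val)
  SELECT val_id, val FROM tmp;

DROP TABLE tmp;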

Does anyone know of a more elegant solution or a solution via privileges?

Thanks,
Ware Adams

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



C API Prepared Statement Interface - MYSQL_BIND

2003-11-19 Thread Richard Tibbetts

I am finding the documentation located at
http://www.mysql.com/doc/en/C_API_Prepared_statement_datatypes.html on
the C API Prepared Statement Interface to be a bit unclear.

In the MYSQL_BIND datatype, when using mysql_bind_param() and
mysql_excute() to pass parameters to a prepared statment, how does one
specify the length of the parameters that will be passed? There are
two fields, buffer_length and length, both of are documented to
specify the length of input arguments.

For buffer_length we have: For character and binary C data, the
buffer_length value specifies the length of *buffer when used with
mysql_bind_param().

In turn for length we have For input parameter data binding, length
points to an unsigned long variable that indicates the length of the
parameter value stored in * buffer; this is used by mysql_execute()

One might conclude that mysql_bind_param() uses buffer_length to tell
the server about the max argument length, and that mysql_execute()
uses length to decide how much data to actually send for a given
invocation. Is this correct?

Thanks,
Richard Tibbetts

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Optimizing Custom Full Text Index

2003-11-19 Thread Mike Boone
Hi all,

I have used PHP and MySQL for years but have never tackled the amount of
data that I now have (and it's expected to grow considerably). My queries
run OK when the server is not busy but they just about kill the DB when
traffic picks up.

My apologies if this is common knowledge...I've had trouble searching on
custom full text indexing because it generally brings up hits regarding the
built-in full text indexing for various DB servers. MySQL's built-in
fulltext doesn't quite do what we want.

We are currently running MySQL 4.0.16 compiled with LinuxThreads on FreeBSD.


Basically, I'm trying to optimize a search involving three tables.

Table A: (content table...currently nearly 40,000 rows and 62 MB)
 id UNSIGNED INT PRIMARY
 status VARCHAR 10 INDEXED
 category VARCHAR 20 INDEXED
 content LONGTEXT
 + other fields

Table B: (stem word index...instead of indexing the exact word, I just keep
the stem, so 'car' and 'cars' are both stored as 'car'. Table currently has
about 180,000 rows and is 9 MB)
 id UNSIGNED INT PRIMARY
 stem_word VARCHAR 30 INDEXED

Table C: (full text index...currently about 4.5 million rows and 186 MB)
 id UNSIGNED INT PRIMARY
 stem_word_id (references id in table B) UNSIGNED INT INDEXED
 content_id (references id in table A) UNSIGNED INT INDEXED


Here's how I perform the search right now.

The user enters keywords. I turn those words into a list of unique stems. I
then search for the stem IDs from Table B using the following query:

SELECT id FROM B WHERE stem_word IN ('truck','piano','move');

Using the IDs from that query (say 10, 20, 30), I run the following query:

SELECT C.content_id, COUNT(C.content_id), A.category FROM A, C WHERE
C.content_id=A.id AND A.status='Active' AND (C.stem_word_id IN (10,20,30))
GROUP BY C.content_id HAVING Count(C.content_id)=3;

I have recently also tried this query, which is a little cleaner without the
count/having stuff, but it seems about the same speed-wise:

SELECT DISTINCT C0.content_id, A.category FROM A, C C0, C C1, C C2 WHERE
C0.content_id=A.id AND A.status='Active' AND C0.stem_word_id=10 AND
C1.stem_word_id=20 AND C0.content_id=C1.content_id AND C2.stem_word_id=30 AND
C0.content_id=C2.content_id;

When running the EXPLAIN on both queries, both are doing 'using where; using
temporary; using filesort' on table C. I'm not sure how to avoid that.
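
One thing I have been wondering about (a sketch, not yet benchmarked) is a
composite index on table C, so the stem lookup and the content_id it needs
can both be read from the index without touching the rows:

    ALTER TABLE C ADD INDEX stem_content (stem_word_id, content_id);

That may not remove the temporary table for the GROUP BY, but it should cut
down the amount of data read from C.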

This system has 512MB and I'm basically using the my-large.cnf file as-is.
Running mytop shows that the key efficiency is 100%, and (cache?) ratio
around 36%. All my tables are MyISAM right now. I tried switching to InnoDB
but it was much slower and I figured there were enough variables to
troubleshoot already without playing around with the InnoDB parameters.

So my questions:

1. Is there anything blatantly wrong with my queries?
2. Should I have designed my index table differently?
3. Any parameter in my.cnf I should modify to be different from the
my-large.cnf settings?
4. Any web resources with instructions for building customized full text
indexing...not using built-in stuff?
5. Since the content field of table A is only used for display (since the
words have been indexed), I was considering compressing the text in that
field to save DB disk space. Is that worth the effort?

Any input is appreciated. Thanks for your help.

Mike Boone
(reply to the list or contact me directly at:
http://boonedocks.net/mailmike.php3)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Need Help Upgrading From 4.x to 4.x

2003-11-19 Thread Thomas Spahni
On Wed, 19 Nov 2003, Mark Marshall wrote:

 Hi, everyone.

 I have a 4.0.4 beta install of Mysql on Red Hat 7.3.  I want to upgrade
 it to 4.0.16, and keep all the data intact.  Do I just dump the
 databases (just in case), stop the server, then ./configure, make, make
 install over top of the old server and start it up again and see what
 happens?

Exactly. The dump is a good idea. Make sure that you compile with the same
options to configure as your 4.0.4 build (everything should go to the same
directory as it was before). This used to be a problem with SuSE
distributions when installing over an old rpm installation, because they
used to have a different directory layout. I can't tell you how RedHat did
this.

Thomas Spahni

 Thanks,
 Mark


 As of November 1st, 2003, Brandywine Senior Care's Corporate Office new contact 
 information is:

 Brandywine Senior Care, Inc.
 525 Fellowship Road
 Suite 360
 Mt. Laurel, NJ 08054
 (856) 813-2000 Phone
 (856) 813-2020 Fax


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysql error

2003-11-19 Thread Hsiu-Hui Tseng
Hi,

I keep getting the following error in mysql error file. Could anyone tell me
what's going on and how to avoid this kind of error?

031117  2:44:37  Error reading packet from server: Lost connection to MySQL
server during query (server_errno=2013)
031117  2:44:37  Slave: Failed reading log event, reconnecting to retry, log
'replication.027' position 711963009
031117  2:44:37  Slave: reconnected to master
'[EMAIL PROTECTED]:3306',replication resumed in log
'replication.027' at position 71196300

Thanks!

Hsiu-Hui


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: piping blob into shell command (tar)

2003-11-19 Thread Denis Mercier
On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
 At 11:03 -0500 11/19/03, Denis Mercier wrote:
 
  i also tried:
 use my_db;
 select * from my_table;
 
  so when i try shell> mysql --pager < test1 | tar x
 
 the tar file does not get written to /usr/local/test1 but i still dont
 get my untared files? write permissions are ok?
 what am i missing?
   
 Have you verified that the mysql command actually writes any output, for
 example, with this command:
   
   shell> mysql --pager < test1 | wc
   
 Do you get any output from tar at all, such as an error message?
 Maybe the output from mysql isn't suitable.  Couple of things to try:
   
   The output from mysql seems fine because when it does write the value
   of my blob I get a well formatted tar file,
   I'm thinking that maybe tar cannot handle a stream as it's input?
 
   That may be.  In that case, you should invoke tar as
 
   tar xf -
 
 
 I have tried tar xf but tar gives me an error, it wants a file
 specified.
 
 But I didn't say to use tar xf.  I said to use tar xf -.  That
 dash is highly significant.  It means The file to read is the
 standard input.

tried tar xf -:
mysql < test1 | tar xf -
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Error exit delayed from previous errors

I dont get it, if i do this command:

mysql> select * into dumpfile '/usr/local/test1.tar' from my_table;

I get a tar archive written to /usr/local/

I'm using mysql client to store my tar file as a blob,
mysql> insert into my_table values(LOAD_FILE('/usr/local/test1.tar'));

could this function cause my problem?
Thanks 
 

 
 -- 
 Paul DuBois, Senior Technical Writer
 Madison, Wisconsin, USA
 MySQL AB, www.mysql.com
 
 Are you MySQL certified?  http://www.mysql.com/certification/
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimizing Custom Full Text Index

2003-11-19 Thread Santino
Have you tried:

SELECT A.id, COUNT(A.id), A.category FROM A, C WHERE
C.content_id=A.id AND A.status='Active' AND (C.stem_word_id IN (10,20,30))
GROUP BY A.id
Santino

At 13:08 -0500 19-11-2003, Mike Boone wrote:
Hi all,

I have used PHP and MySQL for years but have never tackled the amount of
data that I now have (and it's expected to grow considerably). My queries
run OK when the server is not busy but they just about kill the DB when
traffic picks up.
My apologies if this is common knowledge...I've had trouble searching on
custom full text indexing because it generally brings up hits regarding the
built-in full text indexing for various DB servers. MySQL's built-in
fulltext doesn't quite do what we want.
We are currently running MySQL 4.0.16 compiled with LinuxThreads on FreeBSD.

Basically, I'm trying to optimize a search involving three tables.

Table A: (content table...currently nearly 40,000 rows and 62 MB)
 id UNSIGNED INT PRIMARY
 status VARCHAR 10 INDEXED
 category VARCHAR 20 INDEXED
 content LONGTEXT
 + other fields
Table B: (stem word index...instead of indexing the exact word, I just keep
the stem, so 'car' and 'cars' are both stored as 'car'. Table currently has
about 180,000 rows and is 9 MB)
 id UNSIGNED INT PRIMARY
 stem_word VARCHAR 30 INDEXED
Table C: (full text index...currently about 4.5 million rows and 186 MB)
 id UNSIGNED INT PRIMARY
 stem_word_id (references id in table B) UNSIGNED INT INDEXED
 content_id (references id in table A) UNSIGNED INT INDEXED
Here's how I perform the search right now.

The user enters keywords. I turn those words into a list of unique stems. I
then search for the stem IDs from Table B using the following query:
SELECT id FROM B WHERE stem_word IN ('truck','piano','move');

Using the IDs from that query (say 10, 20, 30), I run the following query:

SELECT C.content_id, COUNT(C.content_id), A.category FROM A, C WHERE
C.content_id=A.id AND A.status='Active' AND (C.stem_word_id IN (10,20,30))
GROUP BY C.content_id HAVING Count(C.content_id)=3;
I have recently also tried this query, which is a little cleaner without the
count/having stuff, but it seems about the same speed-wise:
SELECT DISTINCT C0.content_id, A.category FROM A, C C0, C C1, C C2 WHERE
C0.content_id=A.id AND A.status='Active' AND C0.stem_word_id=10 AND
C1.stem_word_id=20 AND C0.content_id=C1.content_id AND C2.stem_word_id=30 AND
C0.content_id=C2.content_id;
When running the EXPLAIN on both queries, both are doing 'using where; using
temporary; using filesort' on table C. I'm not sure how to avoid that.
This system has 512MB and I'm basically using the my-large.cnf file as-is.
Running mytop shows that the key efficiency is 100%, and (cache?) ratio
around 36%. All my tables are MyISAM right now. I tried switching to InnoDB
but it was much slower and I figured there were enough variables to
troubleshoot already without playing around with the InnoDB parameters.
So my questions:

1. Is there anything blatantly wrong with my queries?
2. Should I have designed my index table differently?
3. Any parameter in my.cnf I should modify to be different from the
my-large.cnf settings?
4. Any web resources with instructions for building customized full text
indexing...not using built-in stuff?
5. Since the content field of table A is only used for display (since the
words have been indexed), I was considering compressing the text in that
field so save DB disk space. Is that worth the effort?
Any input is appreciated. Thanks for your help.

Mike Boone
(reply to the list or contact me directly at:
http://boonedocks.net/mailmike.php3)
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: piping blob into shell command (tar)

2003-11-19 Thread Dan Greene
This may be simplistic, but is mysql putting any text before / after blob content, 
such as column name, '1 row processed OK', that may be 'corrupting' the tar data?

Using a very small tar file, run your command, piping to more instead of tar, to 
see if there is any extra text that mysql is adding in when it's not explicitly running 
'into dumpfile'.

 -Original Message-
 From: Denis Mercier [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 19, 2003 1:55 PM
 To: [EMAIL PROTECTED]
 Subject: Re: piping blob into shell command (tar)
 
 
 On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
  At 11:03 -0500 11/19/03, Denis Mercier wrote:
  
   i also tried:
  use my_db;
  select * from my_table;
  
  so when i try shell mysql --pager  test1 | tar x
  
  the tar file does not get written to 
 /usr/local/test1 but i still dont
  get my untared files? write permissions are ok?
  what am i missing?

  Have you verified that the mysql command actually 
 writes any output, for
  example, with this command:

   shell mysql --pager  test1 | wc

  Do you get any output from tar at all, such as an 
 error message?
  Maybe the output from mysql isn't suitable.  Couple 
 of things to try:

The output from mysql seems fine because when it does 
 write the value
of my blob I get a well formatted tar file,
I'm thinking that maybe tar cannot handle a stream as 
 it's input?
  
That may be.  In that case, you should invoke tar as
  
tar xf -
  
  
  I have tried tar xf but tar gives me an error, it wants a file
  specified.
  
  But I didn't say to use tar xf.  I said to use tar xf -.  That
  dash is highly significant.  It means The file to read is the
  standard input.
 
 tried tar xf - 
  mysql < test1 | tar xf -
 tar: This does not look like a tar archive
 tar: Skipping to next header
 tar: Error exit delayed from previous errors
 
 I dont get it, if i do this command:
 
  mysql> select * into dumpfile '/usr/local/test1.tar' from my_table;
 
 I get a tar archive written to /usr/local/
 
 I'm using mysql client to store my tar file as a blob,
  mysql> insert into my_table values(LOAD_FILE('/usr/local/test1.tar'));
 
 could this function cause my problem?
 Thanks 
  
 
  
  -- 
  Paul DuBois, Senior Technical Writer
  Madison, Wisconsin, USA
  MySQL AB, www.mysql.com
  
  Are you MySQL certified?  http://www.mysql.com/certification/
  
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: piping blob into shell command (tar)

2003-11-19 Thread Paul DuBois
At 13:55 -0500 11/19/03, Denis Mercier wrote:
On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
 At 11:03 -0500 11/19/03, Denis Mercier wrote:

  i also tried:
 use my_db;
 select * from my_table;
 
 so when i try shell mysql --pager  test1 | tar x
 
 the tar file does not get written to /usr/local/test1 but 
i still dont
 get my untared files? write permissions are ok?
 what am i missing?
   
 Have you verified that the mysql command actually writes 
any output, for
 example, with this command:
   
  shell mysql --pager  test1 | wc
   
 Do you get any output from tar at all, such as an error message?
 Maybe the output from mysql isn't suitable.  Couple of 
things to try:
   
   The output from mysql seems fine because when it does write the value
   of my blob I get a well formatted tar file,
   I'm thinking that maybe tar cannot handle a stream as it's input?
 
   That may be.  In that case, you should invoke tar as
 
   tar xf -
 
 
 I have tried tar xf but tar gives me an error, it wants a file
 specified.

 But I didn't say to use tar xf.  I said to use tar xf -.  That
 dash is highly significant.  It means The file to read is the
 standard input.
tried tar xf -
mysql < test1 | tar xf -
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Error exit delayed from previous errors
I dont get it, if i do this command:

mysql> select * into dumpfile '/usr/local/test1.tar' from my_table;

I get a tar archive written to /usr/local/

I'm using mysql client to store my tar file as a blob,
mysql> insert into my_table values(LOAD_FILE('/usr/local/test1.tar'));
could this function cause my problem?
No.

The output when you retrieve the column and send it into the pipe
contains something that is not part of the column contents itself.
Likely the column header.  Try using the --skip-column-names option
to suppress the header.
--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
Are you MySQL certified?  http://www.mysql.com/certification/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: error 13

2003-11-19 Thread Yves Goergen
I just received another error message I don't know what to do with:

 Got error 127 from table handler

I tried to alter a table's structure. A restart of MySQL did help. A table
CHECK showed no errors with that table.

-- 
Yves Goergen
[EMAIL PROTECTED]
Please don't CC me (causes double mails)


On Tuesday, November 18, 2003 10:54 PM CET, Yves Goergen wrote:
 Hi there,
 I'm running MySQL 4.0.16-nt (previously 3.23.5x, same system, same
 problem) on Windows 2000 Pro SP3 and almost constantly run against
 the following error (or similar) when I try to alter something about
 the structure of a table (MyISAM database):

 Error on rename of '.\BlackBoard\uni_Forums.MYI' to
 '.\BlackBoard\#sql2-350-c.MYI' (Errcode: 13)

 (...)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Need Help Upgrading From 4.x to 4.x

2003-11-19 Thread William Fong
You could either use mysqldump or just copy the data directory to a safe
place.

Also, if you do not need any special build flags, you should use the
official MySQL binaries, either the RPM or tarball. They optimized the
binaries.

-will


- Original Message - 
From: Thomas Spahni [EMAIL PROTECTED]
To: Mark Marshall [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 10:18 AM
Subject: Re: Need Help Upgrading From 4.x to 4.x


 On Wed, 19 Nov 2003, Mark Marshall wrote:

  Hi, everyone.
 
  I have a 4.0.4 beta install of Mysql on Red Hat 7.3.  I want to upgrade
  it to 4.0.16, and keep all the data intact.  Do I just dump the
  databases (just in case), stop the server, then ./configure, make, make
  install over top of the old server and start it up again and see what
  happens?

 exactly. The dump is a good idea. Make sure that you compile with the same
 options to configure as your 4.0.4 build (everything should go to the same
 directory as it was before). This used to be a problem with SuSE
 distributions when installing over an old prm installation, because they
 used to have a different directory layout. I can't tell you how RedHat did
 this.

 Thomas Spahni

  Thanks,
  Mark
 
 
  As of November 1st, 2003, Brandywine Senior Care's Corporate Office new
contact information is:
 
  Brandywine Senior Care, Inc.
  525 Fellowship Road
  Suite 360
  Mt. Laurel, NJ 08054
  (856) 813-2000 Phone
  (856) 813-2020 Fax


 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: can't start server

2003-11-19 Thread aman raheja
Hello

I would first like to say that you are sending emails to me personally 
by hitting Reply - instead, use Reply All next time so that others can 
view the problem too. I myself am *NOT* a guru in the field, so you can 
expect better advice from an expert on the list.

Anyway.

Firstly - the error looks like what you get when you try to connect and 
the server is not running - have you started the server?
Try with chkconfig, and start it if it's not on:

/etc/rc.d/init.d/mysqld start

Check and let know so we can go on.
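In case it helps, the chkconfig check might look roughly like this (assuming
the service is registered as mysqld):

chkconfig --list mysqld          # shows which runlevels the service is on for
chkconfig mysqld on              # enable it if it is off
/etc/rc.d/init.d/mysqld start    # then start it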

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

Thank you again for your help.
 
I installed the 4.0 version of Mysql on a red hat 8.0 OS. 
I am trying to use the database to create a forum (with PHP) for 
discussion 
It is a production server but with no critical status
I was able to start Mysql a couple of months ago but somehow it went down 
and since then I am
constantly getting the error message: error 2002: can not connect 
to server through socket /var/lib/mysql/mysql.sock
when I try to start it.
I am not an expert but I can do some things around the server if I am 
pointed in the right direction.
 
Thanks
 
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 




Re: error 13

2003-11-19 Thread William Fong
Check this out:
http://www.mysql.com/doc/en/Repair.html

-will


- Original Message - 
From: Yves Goergen [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 11:12 AM
Subject: Re: error 13


 I just received another error message I don't know what to do with:

  Got error 127 from table handler

 I tried to alter a table's structure. A restart of MySQL did help. A table
 CHECK showed no errors with that table.

 -- 
 Yves Goergen
 [EMAIL PROTECTED]
 Please don't CC me (causes double mails)


 On Tuesday, November 18, 2003 10:54 PM CET, Yves Goergen wrote:
  Hi there,
  I'm running MySQL 4.0.16-nt (previously 3.23.5x, same system, same
  problem) on Windows 2000 Pro SP3 and almost constantly run against
  the following error (or similar) when I try to alter something about
  the structure of a table (MyISAM database):
 
  Error on rename of '.\BlackBoard\uni_Forums.MYI' to
  '.\BlackBoard\#sql2-350-c.MYI' (Errcode: 13)
 
  (...)


 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: can't start server

2003-11-19 Thread aman raheja
Do try ps -ef | grep mysqld
to check whether the server is currently running.
If it is not, then do
/etc/init.d/mysqld start
Let us know if there's an error at this point. Also show us your my.cnf

Did you use rpm or source to install mysql?

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

Thank you again for your help.
 
I installed the 4.0 version of Mysql on a red hat 8.0 OS. 
I am trying to use the database to create a forum (with PHP) for 
discussion 
It is a production server but with no critical status
I was able to start Mysql a couple months ago but somehow it went down 
and since then I am
constantly getting the error message of : error 2002: can not connect 
to server through socket /var/lib/mysql/mysql.sock
when I try to start it.
I am not expert but I can do some things around the server if I am 
pointed to the right direction.
 
Thanks
 
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]
 

-Original Message-
From: aman raheja [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 12:28 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: can't start server
Is this a production server?
THIS IS not good advice, but if it is not, then just step down to a
lower init level, turning off the network services, and then come
back to level 3/5, whatever you are using, depending on platform.
Then try to start the server. If you tell us what platform you
are running on and what version of mysql you are using, it will help
you get the solution faster - not the restarting-the-services kind :)
Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

Aman, thank you for your reply.

I did check the ports with netstat and it appears that no other
application is using port 3306.
I am tempted to uninstall mysql and then reinstall it again but
I do not know how to uninstall it the easy way.
Thanks again

[EMAIL PROTECTED]

-Original Message-
From: aman raheja [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 9:20 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: can't start server
did you try netstat and see if something else is using the port.

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

 

I am sure that this problem has already been answered but somehow I can't
figure it out.
When I try to start the mysqld I get the error 2002 can't start server:
Bind or TCP/IP port: Address already in use or...

I checked that the daemon is not running (ps -ef | grep mysqld shows
nothing running), but I just can't start it.
Can someone help me with easy steps to fix this?

Thanks



   




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Need Help Upgrading From 4.x to 4.x

2003-11-19 Thread aman raheja
What if one is using rpm - should one just use the --upgrade option?
rpm -U MySQL-server-4.x
Is this OK to do?
Thanks

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

On Wed, 19 Nov 2003, Mark Marshall wrote:

 

Hi, everyone.

I have a 4.0.4 beta install of Mysql on Red Hat 7.3.  I want to upgrade
it to 4.0.16, and keep all the data intact.  Do I just dump the
databases (just in case), stop the server, then ./configure, make, make
install over top of the old server and start it up again and see what
happens?
   

exactly. The dump is a good idea. Make sure that you compile with the same
options to configure as your 4.0.4 build (everything should go to the same
directory as it was before). This used to be a problem with SuSE
distributions when installing over an old rpm installation, because they
used to have a different directory layout. I can't tell you how RedHat did
this.
Thomas Spahni

 

Thanks,
Mark
As of November 1st, 2003, Brandywine Senior Care's Corporate Office new contact information is:

Brandywine Senior Care, Inc.
525 Fellowship Road
Suite 360
Mt. Laurel, NJ 08054
(856) 813-2000 Phone
(856) 813-2020 Fax
   



 




Re: MySQL 4.0.16 on RHEL3 AS IA64

2003-11-19 Thread Owen Scott Medd
We were unsuccessful in getting the MySQL AMD64 rpms to work on RHEL3 AS.  
I downloaded the source rpm and rebuilt the rpms on our Opteron server, 
and they have mostly worked - only one crash so far, which we are in the 
process of reporting to the MySQL support group.
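For anyone wanting to try the same, the rebuild should look roughly like
this (the .src.rpm and package file names below are only examples):

shell> rpmbuild --rebuild MySQL-4.0.16-0.src.rpm
shell> rpm -Uvh /usr/src/redhat/RPMS/x86_64/MySQL-server-4.0.16-0.x86_64.rpm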

Of course, we are experiencing some serious sadness with the RANGE vs 
REF optimization bug,  but I am hopeful that when 4.0.17 comes out all 
will be well again.

Tomek Dudziak wrote:

Hi,

Did anyone get the 4.0.16 binary to work on RedHat 3 Enterprise for IA64?
I read that the same happens on RH3AS for AMD64.
"This was thoroughly tested with our production team and it works truly
just fine." according to MySQL support :-\
031102 22:07:17  mysqld started
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help 
diagnose
the problem, but since we have already crashed, something is definitely 
wrong
and this may fail.

key_buffer_size=8388600
read_buffer_size=131072
max_used_connections=0
max_connections=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections 
= 225791 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

031102 22:07:17  mysqld ended

 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: piping blob into shell command (tar)

2003-11-19 Thread Denis Mercier
On Wed, 2003-11-19 at 14:02, Paul DuBois wrote:
 At 13:55 -0500 11/19/03, Denis Mercier wrote:
 On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
   At 11:03 -0500 11/19/03, Denis Mercier wrote:
 
i also tried:
   use my_db;
   select * from my_table;
   
   so when i try shell mysql --pager  test1 | tar x
   
   the tar file does not get written to /usr/local/test1 but 
 i still dont
   get my untared files? write permissions are ok?
   what am i missing?
 
   Have you verified that the mysql command actually writes 
 any output, for
   example, with this command:
 
shell mysql --pager  test1 | wc
 
   Do you get any output from tar at all, such as an error message?
   Maybe the output from mysql isn't suitable.  Couple of 
 things to try:
 
 The output from mysql seems fine because when it does write the value
 of my blob I get a well formatted tar file,
 I'm thinking that maybe tar cannot handle a stream as it's input?
   
 That may be.  In that case, you should invoke tar as
   
 tar xf -
   
   
   I have tried tar xf but tar gives me an error, it wants a file
   specified.
 
   But I didn't say to use tar xf.  I said to use tar xf -.  That
   dash is highly significant.  It means The file to read is the
   standard input.
 
 tried tar xf -
 mysql   test1 |  tar xf -
 tar: This does not look like a tar archive
 tar: Skipping to next header
 tar: Error exit delayed from previous errors
 
 I dont get it, if i do this command:
 
 mysql select * into dumpfile /usr/local/test1.tar from my_table
 
 I get a tar archive written to /usr/local/
 
 I'm using mysql client to store my tar file as a blob,
 mysql insert into my_table values(LOAD_FILE(/usr/local/test1.tar));
 
 could this function cause my problem?
 
 No.
 
 The output when you retrieve the column and send it into the pipe
 contains something that is not part of the column contents itself.
 Likely the column header.  Try using the --skip-column-names option
 to suppress the header.
 
I tarred test1 and recreated my_table, using LOAD_FILE to put in test1.tar
(the file was 30 B before tarring, 10 K after).

mysql --skip-column-names < test1 | tar xf -
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Error exit delayed from previous errors


mysql --skip-column-names < test1 | more

./test1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\00100644\\\0036\007756740530\0011034\0 
0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
0\0\0ustar 
\0root\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0root\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0use test;\nselect *
from test;\n\0\0\0\ lot more but all \0's

I don't know if tarring adds all this extra data, but tarring did
increase the file size from 30 B to 10 K.
I used vi to create test1.

 

 -- 
 Paul DuBois, Senior Technical Writer
 Madison, Wisconsin, USA
 MySQL AB, www.mysql.com
 
 Are you MySQL certified?  http://www.mysql.com/certification/
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Deleting column data LOAD question

2003-11-19 Thread Patrick Larkin
Hello -

what would the syntax be to delete all the data in a column for every 
record but still maintain the column?  In other words, I don't want to 
delete the column, just the data in it.

Second, is there a way I can LOAD a file of plain text passwords into a 
mysql database and encrypt them during the LOAD process?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: piping blob into shell command (tar)

2003-11-19 Thread Denis Mercier
On Wed, 2003-11-19 at 14:03, Dan Greene wrote:
 This may be simplistic, but is mysql putting any text before / after blob content, 
 such as column name, '1 row processed OK', that may be 'corrupting' the tar data?
 
 using a very small tar file, and run your command, piping to more instead of tar to 
 see if there is any extra text that mysql is adding in when it's not explictly 
 running 'into dumpfile'

I tried your suggestion; the result is in my other post.
I took your idea further: instead of a tar file I stored
my test1 file as-is, using LOAD_FILE().

before:
use test;
select * from test;
after:
mysql --skip-column-names < test1 | more
use test;\nselect * from test;\n

\n's are added?


  -Original Message-
  From: Denis Mercier [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, November 19, 2003 1:55 PM
  To: [EMAIL PROTECTED]
  Subject: Re: piping blob into shell command (tar)
  
  
  On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
   At 11:03 -0500 11/19/03, Denis Mercier wrote:
   
i also tried:
   use my_db;
   select * from my_table;
   
   so when i try shell mysql --pager  test1 | tar x
   
   the tar file does not get written to 
  /usr/local/test1 but i still dont
   get my untared files? write permissions are ok?
   what am i missing?
 
   Have you verified that the mysql command actually 
  writes any output, for
   example, with this command:
 
shell mysql --pager  test1 | wc
 
   Do you get any output from tar at all, such as an 
  error message?
   Maybe the output from mysql isn't suitable.  Couple 
  of things to try:
 
 The output from mysql seems fine because when it does 
  write the value
 of my blob I get a well formatted tar file,
 I'm thinking that maybe tar cannot handle a stream as 
  it's input?
   
 That may be.  In that case, you should invoke tar as
   
 tar xf -
   
   
   I have tried tar xf but tar gives me an error, it wants a file
   specified.
   
   But I didn't say to use tar xf.  I said to use tar xf -.  That
   dash is highly significant.  It means The file to read is the
   standard input.
  
  tried tar xf - 
  mysql   test1 |  tar xf -
  tar: This does not look like a tar archive
  tar: Skipping to next header
  tar: Error exit delayed from previous errors
  
  I dont get it, if i do this command:
  
  mysql select * into dumpfile /usr/local/test1.tar from my_table
  
  I get a tar archive written to /usr/local/
  
  I'm using mysql client to store my tar file as a blob,
  mysql insert into my_table values(LOAD_FILE(/usr/local/test1.tar));
  
  could this function cause my problem?
  Thanks 
   
  
   
   -- 
   Paul DuBois, Senior Technical Writer
   Madison, Wisconsin, USA
   MySQL AB, www.mysql.com
   
   Are you MySQL certified?  http://www.mysql.com/certification/
   
  
  
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Status shows wrong version number for server after upgrade

2003-11-19 Thread Mark Marshall
I just compiled 4.0.16 and installed it over top of 4.0.4 beta.
 
After stopping and restarting mysqld_safe, I went into mysql and issued
a status command.  I got back the following:
 
mysql status
--
mysql  Ver 12.22 Distrib 4.0.16, for pc-linux (i686)
 
Connection id:  35
Current database:
Current user:   [EMAIL PROTECTED]
SSL:Not in use
Current pager:  stdout
Using outfile:  ''
Server version:  4.0.4-beta-log
Protocol version:   10
Connection: Localhost via UNIX socket
Client characterset:latin1
Server characterset:latin1
UNIX socket:/tmp/mysql.sock
Uptime: 40 min 17 sec
 
Threads: 11  Questions: 1923  Slow queries: 0  Opens: 22  Flush tables:
1  Open
tables: 16  Queries per second avg: 0.796
--
 
What gives?  Is there something else I need to do to update the
server's version?  Everything I see looks like I'm running the
executable that I just compiled this afternoon.
 
Thanks,
Mark


As of November 1st, 2003, Brandywine Senior Care's Corporate Office new contact 
information is:

Brandywine Senior Care, Inc.
525 Fellowship Road
Suite 360
Mt. Laurel, NJ 08054
(856) 813-2000 Phone
(856) 813-2020 Fax




RE: piping blob into shell command (tar)

2003-11-19 Thread Dan Greene
ok... try this:

 mysql --skip-column-names < test1 > mytestoutput.tar

tar xvf mytestoutput.tar

and if it works, try 

cat mytestoutput.tar | tar xf -

to see if it works 


 -Original Message-
 From: Denis Mercier [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 19, 2003 2:41 PM
 To: [EMAIL PROTECTED]
 Subject: Re: piping blob into shell command (tar)
 
 
 On Wed, 2003-11-19 at 14:02, Paul DuBois wrote:
  At 13:55 -0500 11/19/03, Denis Mercier wrote:
  On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
At 11:03 -0500 11/19/03, Denis Mercier wrote:
  
 i also tried:
use my_db;
select * from my_table;

so when i try shell mysql --pager  test1 | tar x

the tar file does not get written to 
 /usr/local/test1 but 
  i still dont
get my untared files? write permissions are ok?
what am i missing?
  
Have you verified that the mysql command 
 actually writes 
  any output, for
example, with this command:
  
 shell mysql --pager  test1 | wc
  
Do you get any output from tar at all, such as 
 an error message?
Maybe the output from mysql isn't suitable.  Couple of 
  things to try:
  
  The output from mysql seems fine because when it 
 does write the value
  of my blob I get a well formatted tar file,
  I'm thinking that maybe tar cannot handle a stream 
 as it's input?

  That may be.  In that case, you should invoke tar as

  tar xf -


I have tried tar xf but tar gives me an error, it wants a file
specified.
  
But I didn't say to use tar xf.  I said to use tar 
 xf -.  That
dash is highly significant.  It means The file to read is the
standard input.
  
  tried tar xf -
  mysql   test1 |  tar xf -
  tar: This does not look like a tar archive
  tar: Skipping to next header
  tar: Error exit delayed from previous errors
  
  I dont get it, if i do this command:
  
  mysql select * into dumpfile /usr/local/test1.tar from my_table
  
  I get a tar archive written to /usr/local/
  
  I'm using mysql client to store my tar file as a blob,
  mysql insert into my_table 
 values(LOAD_FILE(/usr/local/test1.tar));
  
  could this function cause my problem?
  
  No.
  
  The output when you retrieve the column and send it into the pipe
  contains something that is not part of the column contents itself.
  Likely the column header.  Try using the --skip-column-names option
  to suppress the header.
  
 I tarred test1 recreated my_table, using LOAD_FILE put in test1.tar
 30 B file before tarred, after tarred 10 K 
 
 mysql --skip-column-names   test1 | tar xf -
 tar: This does not look like a tar archive
 tar: Skipping to next header
 tar: Error exit delayed from previous errors
 
 
 mysql --skip-column-names   test1 | more
 
 ./test1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\00100644\\\0036\00775674053
 0\0011034\0 0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0ustar 
 \0root\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 root\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0use test;\nselect *
 from test;\n\0\0\0\ lot more but all \0's
 
 I dont know if tarring adds all this extra data? but tarring did
 increase the file size from 30 B to 10 K,
 I used vi to create test1,
 
  
 
  -- 
  Paul DuBois, Senior Technical Writer
  Madison, Wisconsin, USA
  MySQL AB, www.mysql.com
  
  Are you MySQL certified?  http://www.mysql.com/certification/
  
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Wrong bytesec: 0-0-0 at linkstart: 292251752

2003-11-19 Thread Jon Wagoner
I have gotten several of these errors over the last couple of months.  This
has happened on two different servers.  (Both are running MySQL 4.0.16 on
Red Hat 7.3)

It mainly happens in one particular table, but it has happened in another one.
I have tried dropping and recreating the table, but that doesn't seem to
help.  This table has about 75,000 - 100,000 records inserted per day.  They
are otherwise never deleted, but I delete records older than 2 weeks daily.

Someone has suggested that it might be bad memory, but since it happens on
both servers, and on the same table in each case, it seems like there is
something more.

Jon


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: row-level locking question...

2003-11-19 Thread Heikki Tuuri
Andre,

- Original Message - 
From: Andre Charbonneau [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Monday, November 17, 2003 5:11 PM
Subject: row-level locking question...


 Hi,

 Let say that I have the following transaction:

 1. Read value v1 from table t1.
 2. Do some computation using v1.
 3. Update value v2 from table t2.

 If in the above I don't want any other concurrent transaction to read v2
 until I'm done updating it, how should I put an exclusive lock on it?

 Using InnoDB, would the following be the way to do it (in transaction
 mode, seriliazable isolation level)?

 SELECT v2 from t2 FOR UPDATE; // (Do this to prevent others from reading
v2)

 SELECT v1 from t1;

 (do the computation)

 UPDATE t2 set v2=new value;

 COMMIT;


 In the above statements, I first read the value v2 to put an exclusive
 lock on that row.  But I don't really need the value of v2, I just need
 to lock it down.

note that

UPDATE t2 set v2=new value;

automatically sets an x-lock on the row to update. If the above is the whole
story about your application logic, you really do not need to do

SELECT v2 from t2 FOR UPDATE;

first. But, to get serializable execution, you NEED to do a locking read

SELECT v1 from t1 LOCK IN SHARE MODE;

to freeze t1 so that v1 cannot change meanwhile!

---

To sum up, the following program does serializable execution:

BEGIN;

SELECT v1 from t1 LOCK IN SHARE MODE;

(do the computation of v2 based on v1)

UPDATE t2 set v2=new value;

COMMIT;
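As a concrete illustration of the same pattern (the table and column names
here are invented, not from Andre's schema):

BEGIN;

SELECT unit_price FROM price_list WHERE item_id = 42 LOCK IN SHARE MODE;

-- the application computes the new total from unit_price

UPDATE order_totals SET total = 99.80 WHERE order_id = 7;

COMMIT;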

 Is the above approach the way to go or is there a more
 elegant/correct way of doing this?

 Thanks.
 -- 
 Andre Charbonneau

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - hot backup tool for InnoDB which also backs up MyISAM
tables


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: piping blob into shell command (tar)

2003-11-19 Thread Dan Greene
one more idea:

try:
mysql --skip-column-names --raw < test1 | tar xf -


 -Original Message-
 From: Denis Mercier [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, November 19, 2003 2:41 PM
 To: [EMAIL PROTECTED]
 Subject: Re: piping blob into shell command (tar)
 
 
 On Wed, 2003-11-19 at 14:02, Paul DuBois wrote:
  At 13:55 -0500 11/19/03, Denis Mercier wrote:
  On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
At 11:03 -0500 11/19/03, Denis Mercier wrote:
  
 i also tried:
use my_db;
select * from my_table;

so when i try shell mysql --pager  test1 | tar x

the tar file does not get written to 
 /usr/local/test1 but 
  i still dont
get my untared files? write permissions are ok?
what am i missing?
  
Have you verified that the mysql command 
 actually writes 
  any output, for
example, with this command:
  
 shell mysql --pager  test1 | wc
  
Do you get any output from tar at all, such as 
 an error message?
Maybe the output from mysql isn't suitable.  Couple of 
  things to try:
  
  The output from mysql seems fine because when it 
 does write the value
  of my blob I get a well formatted tar file,
  I'm thinking that maybe tar cannot handle a stream 
 as it's input?

  That may be.  In that case, you should invoke tar as

  tar xf -


I have tried tar xf but tar gives me an error, it wants a file
specified.
  
But I didn't say to use tar xf.  I said to use tar 
 xf -.  That
dash is highly significant.  It means The file to read is the
standard input.
  
  tried tar xf -
  mysql   test1 |  tar xf -
  tar: This does not look like a tar archive
  tar: Skipping to next header
  tar: Error exit delayed from previous errors
  
  I dont get it, if i do this command:
  
  mysql select * into dumpfile /usr/local/test1.tar from my_table
  
  I get a tar archive written to /usr/local/
  
  I'm using mysql client to store my tar file as a blob,
  mysql insert into my_table 
 values(LOAD_FILE(/usr/local/test1.tar));
  
  could this function cause my problem?
  
  No.
  
  The output when you retrieve the column and send it into the pipe
  contains something that is not part of the column contents itself.
  Likely the column header.  Try using the --skip-column-names option
  to suppress the header.
  
 I tarred test1 recreated my_table, using LOAD_FILE put in test1.tar
 30 B file before tarred, after tarred 10 K 
 
 mysql --skip-column-names   test1 | tar xf -
 tar: This does not look like a tar archive
 tar: Skipping to next header
 tar: Error exit delayed from previous errors
 
 
 mysql --skip-column-names   test1 | more
 
 ./test1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\00100644\\\0036\00775674053
 0\0011034\0 0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0\0\0\0\0\0\0\0\0\0\0\0\
 0\0\0ustar 
 \0root\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 root\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0
 \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0use test;\nselect *
 from test;\n\0\0\0\ lot more but all \0's
 
 I dont know if tarring adds all this extra data? but tarring did
 increase the file size from 30 B to 10 K,
 I used vi to create test1,
 
  
 
  -- 
  Paul DuBois, Senior Technical Writer
  Madison, Wisconsin, USA
  MySQL AB, www.mysql.com
  
  Are you MySQL certified?  http://www.mysql.com/certification/
  
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: piping blob into shell command (tar)

2003-11-19 Thread Denis Mercier
On Wed, 2003-11-19 at 15:00, Dan Greene wrote:
 ok... try this:
 
  mysql --skip-column-names   test1  mytestoutput.tar

the content does get redirected into mytestoutput.tar

 
 tar xvf mytestoutput.tar
tar: This does not look like a tar archive
tar: Skipping to next header
tar: Error exit delayed from previous errors

there must be more than the column header being added?
 
 and if it works, try 
 
 cat mytestoutput.tar | tar xf -

tar: This does not look like a tar archive
tar: Skipping to next header
tar: Error exit delayed from previous errors
 





 
 to see if it works 
 
 
  -Original Message-
  From: Denis Mercier [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, November 19, 2003 2:41 PM
  To: [EMAIL PROTECTED]
  Subject: Re: piping blob into shell command (tar)
  
  
  On Wed, 2003-11-19 at 14:02, Paul DuBois wrote:
   At 13:55 -0500 11/19/03, Denis Mercier wrote:
   On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
 At 11:03 -0500 11/19/03, Denis Mercier wrote:
   
  i also tried:
 use my_db;
 select * from my_table;
 
 so when i try shell mysql --pager  test1 | tar x
 
 the tar file does not get written to 
  /usr/local/test1 but 
   i still dont
 get my untared files? write permissions are ok?
 what am i missing?
   
 Have you verified that the mysql command 
  actually writes 
   any output, for
 example, with this command:
   
  shell mysql --pager  test1 | wc
   
 Do you get any output from tar at all, such as 
  an error message?
 Maybe the output from mysql isn't suitable.  Couple of 
   things to try:
   
   The output from mysql seems fine because when it 
  does write the value
   of my blob I get a well formatted tar file,
   I'm thinking that maybe tar cannot handle a stream 
  as it's input?
 
   That may be.  In that case, you should invoke tar as
 
   tar xf -
 
 
 I have tried tar xf but tar gives me an error, it wants a file
 specified.
   
 But I didn't say to use tar xf.  I said to use tar 
  xf -.  That
 dash is highly significant.  It means The file to read is the
 standard input.
   
   tried tar xf -
   mysql   test1 |  tar xf -
   tar: This does not look like a tar archive
   tar: Skipping to next header
   tar: Error exit delayed from previous errors
   
   I dont get it, if i do this command:
   
   mysql select * into dumpfile /usr/local/test1.tar from my_table
   
   I get a tar archive written to /usr/local/
   
   I'm using mysql client to store my tar file as a blob,
   mysql insert into my_table 
  values(LOAD_FILE(/usr/local/test1.tar));
   
   could this function cause my problem?
   
   No.
   
   The output when you retrieve the column and send it into the pipe
   contains something that is not part of the column contents itself.
   Likely the column header.  Try using the --skip-column-names option
   to suppress the header.
   
  I tarred test1 recreated my_table, using LOAD_FILE put in test1.tar
  30 B file before tarred, after tarred 10 K 
  
  mysql --skip-column-names   test1 | tar xf -
  tar: This does not look like a tar archive
  tar: Skipping to next header
  tar: Error exit delayed from previous errors
  
  
  mysql --skip-column-names   test1 | more
  
  ./test1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\00100644\\\0036\00775674053
  0\0011034\0 0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0ustar 
  \0root\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  root\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0use test;\nselect *
  from test;\n\0\0\0\ lot more but all \0's
  
  I dont know if tarring adds all this extra data? but tarring did
  increase the file size from 30 B to 10 K,
  I used vi to create test1,
  
   
  
   -- 
   Paul DuBois, Senior Technical Writer
   Madison, Wisconsin, USA
   MySQL AB, www.mysql.com
   
   Are you MySQL certified?  http://www.mysql.com/certification/
   
  
  
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
  
  
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: error 13

2003-11-19 Thread Yves Goergen
Well, thanks for that link, but I don't know what it wants to tell me.

13 = permission denied -- that's absolutely impossible, I have set no
permission restrictions on anything concerning this. And why should a server
restart resolve a permission denied error??

127 = record file crashed -- also, REPAIR/CHECK/ANALYZE - none of them
showed me any errors on that table. And why should a server restart help
with this one?? BTW, could this one cause/indicate any loss of data? At
least I couldn't see any...
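For what it's worth, the perror utility that ships with MySQL translates
these numeric codes (the exact output wording may differ by version):

perror 13     # should print something like: Error code  13:  Permission denied
perror 127    # should print something like: Error code 127:  Record-file is crashed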

So what the hell is going on with MySQL's random error message generator? Is
it really so unstable on Windows after some time?

-- 
Yves Goergen
[EMAIL PROTECTED]
Please don't CC me (causes double mails)


- Original Message - 
From: William Fong [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 8:24 PM
Subject: Re: error 13


 Check this out:
 http://www.mysql.com/doc/en/Repair.html

 -will


 - Original Message - 
 From: Yves Goergen [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Wednesday, November 19, 2003 11:12 AM
 Subject: Re: error 13


  I just received another error message I don't know what to do with:
 
   Got error 127 from table handler
 
  I tried to alter a table's structure. A restart of MySQL did help. A
table
  CHECK showed no errors with that table.
 
  -- 
  Yves Goergen
  [EMAIL PROTECTED]
  Please don't CC me (causes double mails)
 
 
  On Tuesday, November 18, 2003 10:54 PM CET, Yves Goergen wrote:
   Hi there,
   I'm running MySQL 4.0.16-nt (previously 3.23.5x, same system, same
   problem) on Windows 2000 Pro SP3 and almost constantly run against
   the following error (or similar) when I try to alter something about
   the structure of a table (MyISAM database):
  
   Error on rename of '.\BlackBoard\uni_Forums.MYI' to
   '.\BlackBoard\#sql2-350-c.MYI' (Errcode: 13)
  
   (...)
 
 
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 



 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: piping blob into shell command (tar)

2003-11-19 Thread Denis Mercier
On Wed, 2003-11-19 at 15:08, Dan Greene wrote:
 one more idea:
 
 try:
 mysql --skip-column-names --raw   test1 | tar xf -
 no output,

mysql --skip-column-names --raw < test1 | more
./test1

 
 
  -Original Message-
  From: Denis Mercier [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, November 19, 2003 2:41 PM
  To: [EMAIL PROTECTED]
  Subject: Re: piping blob into shell command (tar)
  
  
  On Wed, 2003-11-19 at 14:02, Paul DuBois wrote:
   At 13:55 -0500 11/19/03, Denis Mercier wrote:
   On Wed, 2003-11-19 at 12:26, Paul DuBois wrote:
 At 11:03 -0500 11/19/03, Denis Mercier wrote:
   
  i also tried:
 use my_db;
 select * from my_table;
 
 so when i try shell mysql --pager  test1 | tar x
 
 the tar file does not get written to 
  /usr/local/test1 but 
   i still dont
 get my untared files? write permissions are ok?
 what am i missing?
   
 Have you verified that the mysql command 
  actually writes 
   any output, for
 example, with this command:
   
  shell mysql --pager  test1 | wc
   
 Do you get any output from tar at all, such as 
  an error message?
 Maybe the output from mysql isn't suitable.  Couple of 
   things to try:
   
   The output from mysql seems fine because when it 
  does write the value
   of my blob I get a well formatted tar file,
   I'm thinking that maybe tar cannot handle a stream 
  as it's input?
 
   That may be.  In that case, you should invoke tar as
 
   tar xf -
 
 
 I have tried tar xf but tar gives me an error, it wants a file
 specified.
   
 But I didn't say to use tar xf.  I said to use tar 
  xf -.  That
 dash is highly significant.  It means The file to read is the
 standard input.
   
   tried tar xf -
   mysql   test1 |  tar xf -
   tar: This does not look like a tar archive
   tar: Skipping to next header
   tar: Error exit delayed from previous errors
   
   I dont get it, if i do this command:
   
   mysql select * into dumpfile /usr/local/test1.tar from my_table
   
   I get a tar archive written to /usr/local/
   
   I'm using mysql client to store my tar file as a blob,
   mysql insert into my_table 
  values(LOAD_FILE(/usr/local/test1.tar));
   
   could this function cause my problem?
   
   No.
   
   The output when you retrieve the column and send it into the pipe
   contains something that is not part of the column contents itself.
   Likely the column header.  Try using the --skip-column-names option
   to suppress the header.
   
  I tarred test1 recreated my_table, using LOAD_FILE put in test1.tar
  30 B file before tarred, after tarred 10 K 
  
  mysql --skip-column-names   test1 | tar xf -
  tar: This does not look like a tar archive
  tar: Skipping to next header
  tar: Error exit delayed from previous errors
  
  
  mysql --skip-column-names   test1 | more
  
  ./test1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\00100644\\\0036\00775674053
  0\0011034\0 0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0\0\0\0\0\0\0\0\0\0\0\0\
  0\0\0ustar 
  \0root\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  root\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0
  \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0use test;\nselect *
  from test;\n\0\0\0\ lot more but all \0's
  
  I dont know if tarring adds all this extra data? but tarring did
  increase the file size from 30 B to 10 K,
  I used vi to create test1,
  
   
  
   -- 
   Paul DuBois, Senior Technical Writer
   Madison, Wisconsin, USA
   MySQL AB, www.mysql.com
   
   Are you MySQL certified?  http://www.mysql.com/certification/
   
  
  
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
  http://lists.mysql.com/[EMAIL PROTECTED]
  
  
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Optimizing Custom Full Text Index

2003-11-19 Thread Mike Boone
Hello Santino,

I tried to formulate my query as you suggested. According to EXPLAIN, MySQL
seems to process it the same way as my first query, working first on table C
and using where, temporary, and filesort.

Thanks,
Mike.

-Original Message-
From: Santino [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 2:01 PM
To: Mike Boone; [EMAIL PROTECTED]
Subject: Re: Optimizing Custom Full Text Index


Have You tryed:

SELECT A.id, COUNT(A.id), A.category FROM A, C WHERE
C.content_id=A.id AND A.status='Active' AND (C.stem_word_id IN (10,20,30))
GROUP BY A.id

Santino


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



myisamchk Error 22 WinServer 2003 Large table

2003-11-19 Thread Jim Gallagher
Hello,

I am following the advice of the manual when bulk loading a large table by turning off 
index builds before LOAD DATA INFILE.  The load ends normally (73 GB).  The myisamchk 
build of the indices fails:

myisamchk: warning: Can't change size of indexfile, error: 22
myisamchk: error: 22 for record at pos 121201294124
myISAM-table 'mytable' is not fixed because of errors

This process works successfully on a smaller table (37 GB) on another Windows Server 2003 machine.

What am I doing wrong?  I'm new to MySQL, and am just finding my way.  The command I'm 
issuing is:

myisamchk -O sort_buffer_size=512M -O key_buffer_size=768M -O read_buffer_size=512M -O 
write_buffer_size=512M -rq d:\mysql\data\mydb\mytable

I have 4GB of memory on the machine.

I searched for this problem with Google and found several others with the same 
problem, but didn't find a solution.

Thanks,

Jim Gallagher



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Deleting column data LOAD question

2003-11-19 Thread William Fong
 what would the syntax be to delete all the data in a column for every
 record but still maintain the column?  In other words, I don't want to
 delete the column, just the data in it.

I would use an UPDATE and set it to '' or NULL, depending on your column.

 Second, is there a way I can LOAD a file of plain text passwords into a
 mysql database and encrypt them during the LOAD process?

I don't think you can run a function during LOAD DATA. But why not just do
another UPDATE after it has been imported?  Use a WRITE lock if you are
worried about security.
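A rough sketch of both ideas (table, column and file names here are made up):

-- blank out a column for every row while keeping the column itself
UPDATE users SET middle_name = NULL;    -- or = '' if the column is NOT NULL

-- load plain-text passwords, then encrypt them in place afterwards
LOCK TABLES users WRITE;
LOAD DATA INFILE '/tmp/passwords.txt' INTO TABLE users (username, pass);
UPDATE users SET pass = PASSWORD(pass); -- or MD5(pass) / ENCRYPT(pass)
UNLOCK TABLES;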

-will

Real-time Chat: irc.freenode.net - #mysql
( http://www.mysql.com/doc/en/IRC.html )

- Original Message - 
From: Patrick Larkin [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 11:47 AM
Subject: Deleting column data  LOAD question


 Hello -

 what would the syntax be to delete all the data in a column for every
 record but still maintain the column?  In other words, I don't want to
 delete the column, just the data in it.

 Second, is there a way I can LOAD a file of plain text passwords into a
 mysql database and encrypt them during the LOAD process?


 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Mysql server time setting.

2003-11-19 Thread Jeff McKeon
I have two servers replicating.  Server one to server two.

If I do a show variables on server one, it shows the timezone as EST; on
server two it shows it as GMT.  Where is this set and how can I change
server one to GMT?

The Linux box itself that mysql server one runs on is set for GMT.

Thanks,

Jeff

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: OT: MySQL NAT

2003-11-19 Thread adburne






Right, I've a Linux box running Apache and another running MySQL, both on my private LAN, and I want to connect from the Internet through a firewall; to do that I'm using NAT and port forwarding with iptables.

First I tried to set it up for Apache (just as a test) and things went right, all is OK; but with MySQL I can't connect. The MySQL client doesn't return any error, it just stays waiting...

If I don't do the NAT, the mysql client returns: ERROR 2003: Can't connect to MySQL server on xxx.xxx.xxx.xxx (111). Obviously =)

If someone can help, thanks.

Alejandro


---Original Message---


From: [EMAIL PROTECTED]
Date: Wednesday, November 19, 2003 00:00:16
To: [EMAIL PROTECTED]
Subject: Re: OT: MySQL & NAT

What exactly do you mean - you want to forward a port to a mysql server in
your lan?

 Someone can make nat with mysql? I can do it with apache without
 problems, but mysql client freeze trying to connect.

 Thanx, Alejandro.



How to do safe monetary calculations?

2003-11-19 Thread fn215525
I'd like to know your opinion on monetary calculations.
For accurate representation of monetary (euro) values I use DECIMAL fields 
(12,5).

Now: how to do safe calculations?
Are these commands safe?

1) SELECT SUM(price) FROM ...
2) SELECT SUM(price * 1.2) FROM ...
3) SELECT SUM(price) * 1.2 FROM ...

Do you think I should do all these things in PHP?

Bye

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Execute shell script

2003-11-19 Thread adburne






Thanks Paul for your reply. I'll extend my question a little and try to be more specific:

Is there a command or function to call a shell script through a MySQL server on a Linux box, using ODBC as the client?

---Original Message---


From: Paul DuBois
Date: Tuesday, November 18, 2003 18:41:48
To: adburne; [EMAIL PROTECTED]
Subject: Re: Execute shell script

At 14:05 -0600 11/18/03, Paul DuBois wrote:
At 4:47 PM -0300 11/18/03, adburne wrote:
There is a command or function to call a shell script through mysql?

Invoke mysql, then issue a \h command and look in the output for
the line that begins with "system".

I forgot to mention: The system command is Unix-only.

--
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

Are you MySQL certified? http://www.mysql.com/certification/

Re: Replacing Multiple Subqueries

2003-11-19 Thread Héctor Villafuerte D.
Héctor Villafuerte D. wrote:

Hi all,
I know how to do this query with subqueries like this:
select * from traf_oper where
  rutasalien in
  (select ruta_salid from rutas where codigo_ope = 0)
  and
  rutaentran in
  (select ruta_salid from rutas where codigo_ope <> 0)
--

I tried to do this:

mysql> explain select a.* from traf_oper a join rutas b on 
a.rutasalien = b.ruta_salid where b.codigo_ope = 0 and
a.rutaentran = b.ruta_salid where b.codigo_ope <> 0;
ERROR 1064: You have an error in your SQL syntax.  Check the manual 
that corresponds to your MySQL server version
for the right syntax to use near 'where b.codigo_ope <> 0' at line 1

How can I substitute multiple subqueries with JOIN's?
Thanks in advance.
Hi guys, just to let you know how I solved it!

select a.* from traf_oper a
  join rutas r1
  on a.rutasalien = r1.ruta_salid and r1.codigo_ope = 0
  join rutas r2
  on a.rutaentran = r2.ruta_salid and r2.codigo_ope <> 0
Nice, isn't it? :)
Hector
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


date_format, distinct and binary chars

2003-11-19 Thread tjkuhn
Okay, here's an interesting one. Here's the query:

SELECT distinct date_format(auditdate, "%Y%M") as listUrl, date_format(
auditdate, "%Y%m") as blank
  FROM quality_history
  WHERE auditdate < Now()
  ORDER BY listUrl desc
  LIMIT 6

auditdate is a DATE datatype

When I run this query the result datatype for both listUrl and blank is a
BINARY CHAR! Is that correct? Why a BINARY? Also, when I take out the
DISTINCT I get back a TIMESTAMP (pretty weird, huh?). I'm obviously missing
something. Any ideas on how I can return just a regular CHAR (or TEXT, or
VARCHAR, or DATE, or anything that looks like a DATE with the format of
%Y%M)? By the way, I'm running 3.23.51 on Windows 2000

Thanks in advance!



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: can't start server

2003-11-19 Thread aman raheja
Do check the ownership on /var/lib/mysql
if not right, then
chown -R mysql /var/lib/mysql
I am getting out of ideas here. I would recommend uninstalling all the 
mysql packages - mysql-server, mysql-client and any others you have 
installed - and starting afresh.
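If you go the reinstall route, something along these lines should do it
(the package names on Red Hat 8 may differ slightly from what I show here):

shell> rpm -qa | grep -i mysql           # list the installed mysql packages
shell> ls -ld /var/lib/mysql             # data directory should be owned by mysql
shell> chown -R mysql:mysql /var/lib/mysql
shell> rpm -e mysql-server mysql         # remove whatever the query above listed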

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

Hello again

I installed mysql from rpm.
I did check the status of the server with ps -ef | grep mysqld
and it indicated that the daemon is not running.
I tried to start the server by
/etc/init.d/mysqld start and the error 2002
appeared again
In my.cnf file I have
[mysqld]
socket=/tmp/mysqld.sock
basedir=/var/lib/mysql/mysql
datadir=/var/lib/mysql
[client]
socket=/tmp/mysqld.sock


-Original Message-
From: aman raheja [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 2:27 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: can't start server
Do try ps -ef | grep mysqld
to check the server is currently running.
If not then do
/etc/init.d/mysqld start
Let know if there's an error at this point. Also show your my.cnf

Did you use rpm or source to install mysql?

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:

 

Thank you again for your help.

I installed the 4.0 version of Mysql on a red hat 8.0 OS.
I am trying to use the database to create a forum (with PHP) for
discussion
It is a production server but with no critical status
I was able to start Mysql a couple months ago but somehow it went down
and since then I am
constantly getting the error message of : error 2002: can not connect
to server through socket /var/lib/mysql/mysql.sock
when I try to start it.
I am not expert but I can do some things around the server if I am
pointed to the right direction.
Thanks

[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]

   -Original Message-
   From: aman raheja [mailto:[EMAIL PROTECTED]
   Sent: Wednesday, November 19, 2003 12:28 PM
   To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
   Subject: Re: can't start server
   Is this a production server?
   THIS IS not a good advice but If not then just step down to a
   lower init level turning off the network services and then come
   back into level 3/5 whatever you are using, depending on platform.
   Then try to start the server. If you would tell what platform you
   are running on and what version of mysql you are using, will help
   you get the solution faster - not the restarting the services kind :)
   Aman Raheja
   AGF Technologies
   http://www.agftech.com
   [EMAIL PROTECTED] wrote:

   

Aman, thank you for your reply.

I did check the ports with netstat and it appears that no other
application is using port 3306.
I am tempting to uninstall mysql and then reinstall it again but
I do not know how to uninstall it the easy way.
Thanks again

[EMAIL PROTECTED]

-Original Message-
From: aman raheja [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 9:20 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: can't start server
did you try netstat and see if something else is using the port.

Aman Raheja
AGF Technologies
http://www.agftech.com
[EMAIL PROTECTED] wrote:
 

I am sure that this problem has already been answered but some how I
   

can't
 

figure it out.

When I try to start the mysqld I get the error 2002 can't start server:

   

Bind
 

or TCP/IP port: Address already in use or...

I check that the damoen is not runnig (ps -ef |grep mysqld ) and shows
nothing running. But I just can start it.
Someone there can help me with easy step to fix this?

Thanks
   




RE: Mysql server time setting.

2003-11-19 Thread Jeff McKeon
I'm running mysql on a Red Hat system, starting it from a script in
/etc/init.d/ as is the default with the rpm install.  I can't seem to
get the timezone environment variable to change no matter what I put in
the /etc/my.cnf file.
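From what I can tell mysqld picks up its time zone from the TZ environment
variable at startup, so exporting it in the startup script before mysqld_safe
runs might do it - an untested sketch:

# near the top of /etc/init.d/mysqld, before mysqld_safe is invoked
TZ=GMT
export TZ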

Can anyone help me with this?

Jeff

 -Original Message-
 From: Jeff McKeon 
 Sent: Wednesday, November 19, 2003 4:54 PM
 To: [EMAIL PROTECTED]
 Subject: Mysql server time setting.
 
 
 I have two servers replicating.  Server one to server two.
 
 If I do a show variables on server one it show the timezone 
 as EST, on server two it shows it as GMT.  Where is this set 
 and how can I change server one to GMT?
 
 The linux box itself, that mysql server one runs on is set for GMT
 
 Thanks,
 
 Jeff
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql? [EMAIL PROTECTED]
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



SHOWing temporary tables

2003-11-19 Thread Héctor Villafuerte D.
Hi all,
How can I see the temporary tables in a database?
Is there something like SHOW TEMPORARY TABLES?
Thanks,
Hector
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Running without logfiles

2003-11-19 Thread Jeff Mathis
I run the same configuration.

I have the following settings in my /etc/my.cnf file

innodb_log_archive=0
innodb_flush_log_at_trx_commit=0
#log-update = /export/disk1/mysql/logs/snow

Once you have the first set to 0, the other two may not matter.




Arnoldus Th.J. Koeleman wrote:
 
 I am using InnoDB tables but I'd like to run this database without
 creating any logfiles.
 
 Is there a way that Mysql doesn't create any logfiles at all? I can
 recover from a master system, so this database can be allowed to lose
 the data.

-- 
Jeff Mathis, Ph.D.  505-955-1434
The Prediction Company  [EMAIL PROTECTED]
525 Camino de los Marquez, Ste 6http://www.predict.com
Santa Fe, NM 87505

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



check point command

2003-11-19 Thread kp gbr
How do I perform a checkpoint in MySQL?
 
kp



Re: Standard vs. Log

2003-11-19 Thread Matt W
Hi Jon,

The -log suffix is added when you're running with logging (log or
log-bin in my.cnf/my.ini). log-bin may be in use for replication, so
be careful about removing it. And if a server isn't using logging, you
probably don't need it.
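For example, a my.cnf that produces a -log server version might contain
something like this (the paths are only illustrative):

[mysqld]
# general query log
log = /var/log/mysql/mysql.log
# binary log (needed on a replication master)
log-bin = /var/log/mysql/mysql-bin

Commenting those lines out and restarting mysqld should give you -standard
again; just don't remove log-bin on a box that is acting as a replication
master.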


Hope that helps.


Matt



- Original Message -
From: Jonathan Rosenberg
Sent: Tuesday, November 18, 2003 11:21 PM
Subject: Standard vs. Log


 I have mysql on linux on two machines.  Both version 4.0.16.  On one
 machine, it is reported as 4.0.16-standard and on the other machine it
is
 reported as 4.0.16-log (according to PHPMyAdmin).  What is the
difference
 (obviously something with logging) and how can I change from standard
to log
 and vice versa?

 Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: using temporary / using filesort and disk tables

2003-11-19 Thread Matt W
Hi Arnaud,

A disk-based temp table is used if you're SELECTing a column [that can
be] longer than 255 characters (BLOB/TEXT to MySQL). This is because the
in memory HEAP tables don't currently support variable length rows.
Using something like LEFT(text_col, 255), if feasible, will get around
this problem.
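
For example (table and column names are only placeholders), instead of
selecting the TEXT column directly in a GROUP BY / ORDER BY query, something
like this may let the temporary table stay in memory:

  SELECT id, LEFT(body, 255) AS body_snippet
  FROM articles
  GROUP BY author_id
  ORDER BY created_at;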

Also could be disk based if the query examines many rows (large temp
table), but your tmp_table_size would probably cover that.

BTW, 512M is very, very high for tmp_table_size! Do you have enough
memory for 512M * number of connections? :-)


Matt


- Original Message -
From: Arnaud
Sent: Wednesday, November 19, 2003 2:18 AM
Subject: using temporary / using filesort and disk tables


 Hi!

 I have a query that allways creates temporary tables to disk (the
ratio
 created_tmp_disk_tables/created_tmp_tables is close to 0.8, but I
think it
 is 1 for this particular query).
 This query joins 4 tables, groups by a field a orders by another field
(or
 sometimes an sql variable).
 When I analyze it, I get where used, using temporary, using filesort
for
 the first table of the join (whatever the order of the join), and
where
 used for the other ones.
 I have only 2000 rows scanned forthe first table, and 1 for the 3
other
 ones.
 The variables tmp_table_size and max_heap_table_size are both set very
high
 (~512M).

 I would like to get rid of those disk tables, to improve the
performance of
 my query. I understand that using a group by and order by on different
 fields implies the use of a temporary table. What I don't understand
is why
 this table is created on disk, and not in memory? Is it because of the
 filesort? If yes, how could I get rid of the filesort?

 If this is not clear enough, I can post a sample query and the result
of the
 explain.

 Thanks in advance!

 Arnaud


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: can't start server

2003-11-19 Thread mo toufali
Thanks again Aman
I think I will uninstall it and start afresh.
I will let you know how I did.
mo toufali
[EMAIL PROTECTED]
  -Original Message-
  From: aman raheja [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, November 19, 2003 5:54 PM
  To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
  Subject: Re: can't start server


  Do check the ownership on /var/lib/mysql
  if not right, then
  chown -R mysql /var/lib/mysql

  I am running out of ideas here. I would recommend uninstalling all the mysql
packages - mysql-server, mysql-client and any others you have installed - and starting
afresh.

  Aman Raheja
  AGF Technologies
  http://www.agftech.com


  [EMAIL PROTECTED] wrote:

Hello again

I installed mysql from rpm.
I did check the status of the server with ps -ef | grep mysqld
and it indicated that the daemon is not running.

I tried to start the server by
/etc/init.d/mysqld start and the error 2002
appeared again

In my.cnf file I have
[mysqld]
socket=/tmp/mysqld.sock
basedir=/var/lib/mysql/mysql
datadir=/var/lib/mysql
[client]
socket=/tmp/mysqld.sock




-Original Message-
From: aman raheja [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 2:27 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: can't start server


Do try ps -ef | grep mysqld
to check the server is currently running.
If not then do
/etc/init.d/mysqld start

Let know if there's an error at this point. Also show your my.cnf

Did you use rpm or source to install mysql?

Aman Raheja
AGF Technologies
http://www.agftech.com

[EMAIL PROTECTED] wrote:

  Thank you again for your help.

I installed the 4.0 version of Mysql on a red hat 8.0 OS.
I am trying to use the database to create a forum (with PHP) for
discussion
It is a production server but with no critical status
I was able to start MySQL a couple of months ago but somehow it went down
and since then I am constantly getting the error message: error 2002: cannot connect
to server through socket /var/lib/mysql/mysql.sock
when I try to start it.
I am not an expert, but I can do some things around the server if I am
pointed in the right direction.

Thanks

[EMAIL PROTECTED] mailto:[EMAIL PROTECTED]


-Original Message-
From: aman raheja [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 12:28 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: can't start server

Is this a production server?
This is NOT good advice, but if it is not, then just step down to a
lower init level, turning off the network services, and then come
back into level 3/5 or whatever you are using, depending on the platform.

Then try to start the server. If you tell us what platform you are
running on and what version of MySQL you are using, it will help you
get to the solution faster - not the restart-the-services kind :)

Aman Raheja
AGF Technologies
http://www.agftech.com


[EMAIL PROTECTED] wrote:

Aman, thank you for your reply.

I did check the ports with netstat and it appears that no other
application is using port 3306.

I am tempted to uninstall mysql and then reinstall it, but
I do not know how to uninstall it the easy way.

Thanks again

[EMAIL PROTECTED]

-Original Message-
From: aman raheja [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 9:20 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: can't start server


Did you try netstat to see if something else is using the port?

Aman Raheja
AGF Technologies
http://www.agftech.com

[EMAIL PROTECTED] wrote:
  I am sure that this problem has already been answered but somehow I can't
  figure it out.

When I try to start mysqld I get the error 2002 can't start server:
Bind on TCP/IP port: Address already in use or...


I checked that the daemon is not running (ps -ef | grep mysqld) and it shows
nothing running. But I just can't start it.


Someone there can help me with easy step to fix this?

Thanks



Re: very slow delete queries - never ending

2003-11-19 Thread Matt W
Hi Richard,

As I think Gerald Clark said, you could run DELETEs with LIMITs (a few
thousand rows at a time, say) in a loop until all the rows are deleted.
This won't make the deletes any faster (probably slightly slower in total,
actually), but it will allow other clients to use the table in between.
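
A sketch of that pattern, using the table and cutoff from the original post
(the batch size is arbitrary):

  DELETE FROM datatable WHERE acttime < '2003-11-14 09:39:49' LIMIT 10000;

Repeat the statement (from a small script, for instance) until it reports
0 rows affected.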

Have you seen this page in the manual:
http://www.mysql.com/doc/en/Delete_speed.html What's the size of your
key_buffer? Might want to increase it.

Also make sure the table doesn't have any unnecessary indexes, which would
make DELETEs slower.


Hope that helps.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 4:12 AM
Subject: very slow delete queries - never ending


 I have a problem with a bigger table on mysql 4.0.16-log / debian
linux

 I played around with indexes, delete quick and such, but I just can't
get it
 to work.
 The following table holds 35 million rows and has 5 million inserts/replaces per
 day. To clean it up I want to delete all rows older than X days.

 I would be very happy if somebody could help me on this. I'm stuck. I
worked
 with tables of that size with 3.23.49-log and didn't have problems,
although
 I must say that the amount of inserts is very high in this case.

 The server is a 2.5GHz Pentium 4, 1.5GB RAM, SCSI RAID disks and such
 hardware, so performance should not be a problem. What variables in
 mysql should I modify? Has anybody got experience with that and can
 help?

 thanks!
 Richard

[snip]

 mysql> select count(*) from datatable where acttime < '2003-11-14 09:39:49';
 +--+
 | count(*) |
 +--+
 |  7194367 |
 +--+
 1 row in set (3 min 22.15 sec)

 mysql> select count(*) from datatable;
 +--+
 | count(*) |
 +--+
 | 36003669 |
 +--+
 1 row in set (5.87 sec)

 mysql> delete quick from datatable where acttime < '2003-11-14 09:39:49';
 or
 mysql> delete from datatable where acttime < '2003-11-14 09:39:49';

 ...takes forever. I killed it after 20 hours...


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: JOIN vs INNER JOIN?

2003-11-19 Thread Matt W
Hi Yves,

http://www.mysql.com/doc/en/JOIN.html

table_reference [INNER | CROSS] JOIN table_reference [join_condition]

The [ ... ] means that INNER is optional -- in MySQL at least, not
sure about the SQL standard.
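
In other words, these two are equivalent (table and column names here are
only illustrative):

  SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id;
  SELECT * FROM orders o INNER JOIN customers c ON o.customer_id = c.id;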


Hope that helps.


Matt


- Original Message -
From: Yves Goergen
Sent: Wednesday, November 19, 2003 8:12 AM
Subject: JOIN vs INNER JOIN?


 Can someone tell me the difference between a JOIN and an INNER JOIN of
two
 tables, please? I can't find the JOIN alone documented in the MySQL
 manual.

 --
 Yves Goergen
 [EMAIL PROTECTED]
 Please don't CC me (causes double mails)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimizing Custom Full Text Index

2003-11-19 Thread Matt W
Hi Mike,

Those tables aren't that big for what you're doing (which is about how
I'd do it if I wasn't using built-in full-text :-)).

How many results are your searches returning? How long are the queries
taking?

The C table: Do you need the index on content_id? For deletes or
something? Doesn't really matter for speed, but you might as well dump
it if it's not used. What's the PRIMARY id there for? Just to have an id
column? :-) It looks like you can get rid of it. I'd make a composite
PRIMARY KEY on (stem_word_id, content_id). Otherwise make it UNIQUE if
you *need* the PRIMARY id. This will make EXPLAIN say Using index on C
for searches which will save a lot of random disk seeks to the data
file.
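
A sketch of that change (the single-column index name stem_word_id is only a
guess at the default name; check SHOW INDEX FROM C for the real one):

  ALTER TABLE C
      DROP COLUMN id,                              -- drops the surrogate id and its PRIMARY KEY
      DROP INDEX stem_word_id,                     -- the old single-column index, now redundant
      ADD PRIMARY KEY (stem_word_id, content_id);  -- lets searches be resolved from the index alone

The index on content_id can stay if it is really needed for deletes.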

Can I ask what the problems are with MySQL's built-in full-text search?
I know there's a few since I've encountered them too, but I have some
ideas to work around them. Unfortunately, one that would be hard to work
around is stemming (waiting for that to be implemented internally). :-(
Or are you just doing stemming to save space in the index and not for
functionality?


Hope that helps.


Matt


- Original Message -
From: Mike Boone
Sent: Wednesday, November 19, 2003 12:08 PM
Subject: Optimizing Custom Full Text Index


 Hi all,

 I have used PHP and MySQL for years but have never tackled the amount
of
 data that I now have (and it's expected to grow considerably). My
queries
 run OK when the server is not busy but they just about kill the DB
when
 traffic picks up.

 My apologies if this is common knowledge...I've had trouble searching
on
 custom full text indexing because it generally brings up hits
regarding the
 built-in full text indexing for various DB servers. MySQL's built-in
 fulltext doesn't quite do what we want.

 We are currently running MySQL 4.0.16 compiled with LinuxThreads on
FreeBSD.


 Basically, I'm trying to optimize a search involving three tables.

 Table A: (content table...currently nearly 40,000 rows and 62 MB)
  id UNSIGNED INT PRIMARY
  status VARCHAR 10 INDEXED
  category VARCHAR 20 INDEXED
  content LONGTEXT
  + other fields

 Table B: (stem word index...instead of indexing the exact word, I just
keep
 the stem, so 'car' and 'cars' are both stored as 'car'. Table
currently has
 about 180,000 rows and is 9 MB)
  id UNSIGNED INT PRIMARY
  stem_word VARCHAR 30 INDEXED

 Table C: (full text index...currently about 4.5 million rows and 186
MB)
  id UNSIGNED INT PRIMARY
  stem_word_id (references id in table B) UNSIGNED INT INDEXED
  content_id (references id in table A) UNSIGNED INT INDEXED


 Here's how I perform the search right now.

 The user enters keywords. I turn those words into a list of unique
stems. I
 then search for the stem IDs from Table B using the following query:

 SELECT id FROM B WHERE stem_word IN ('truck','piano','move');

 Using the IDs from that query (say 10, 20, 30), I run the following
query:

 SELECT C.content_id, COUNT(C.content_id), A.category FROM A, C WHERE
 C.content_id=A.id AND A.status='Active' AND (C.stem_word_id IN
(10,20,30))
 GROUP BY C.content_id HAVING Count(C.content_id)=3;

 I have recently also tried this query, which is a little cleaner
without the
 count/having stuff, but it seems about the same speed-wise:

 SELECT DISTINCT C0.content_id, A.category FROM A, C C0, C C1, C C2 WHERE
 C0.content_id=A.id AND A.status='Active' AND C0.stem_word_id=10 AND
 C1.stem_word_id=20 AND C0.content_id=C1.content_id AND C2.stem_word_id=30 AND
 C0.content_id=C2.content_id;

 When running the EXPLAIN on both queries, both are doing 'using where;
using
 temporary; using filesort' on table C. I'm not sure how to avoid that.

 This system has 512MB and I'm basically using the my-large.cnf file
as-is.
 Running mytop shows that the key efficiency is 100%, and (cache?)
ratio
 around 36%. All my tables are MyISAM right now. I tried switching to
InnoDB
 but it was much slower and I figured there were enough variables to
 troubleshoot already without playing around with the InnoDB
parameters.

 So my questions:

 1. Is there anything blatantly wrong with my queries?
 2. Should I have designed my index table differently?
 3. Any parameter in my.cnf I should modify to be different from the
 my-large.cnf settings?
 4. Any web resources with instructions for building customized full
text
 indexing...not using built-in stuff?
 5. Since the content field of table A is only used for display (since
the
 words have been indexed), I was considering compressing the text in
that
 field to save DB disk space. Is that worth the effort?

 Any input is appreciated. Thanks for your help.

 Mike Boone
 (reply to the list or contact me directly at:
 http://boonedocks.net/mailmike.php3)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Easy (?) SELECT questions

2003-11-19 Thread cpuworks
 Mark Wilson [EMAIL PROTECTED] wrote:
 Two related questions.
 1. I have a table with a date field. (Ex: '04/13/2002'). I want to get
 a list
 of all UNIQUE values for that field (many entries from the same day),
 i.e., all
 days with entries.
 **
 CREATE TABLE metrics_events_power {
  mep_id int(11) NOT NULL auto_increment,
  mep_date text,
  mep_time time DEFAULT '00:00:00' NOT NULL
 }
 INSERT INTO metrics_events_power VALUES ('1', '04/13/2002', '10:41:19');
 INSERT INTO metrics_events_power VALUES ('1', '04/13/2002', '10:46:19');
 INSERT INTO metrics_events_power VALUES ('1', '04/14/2002', '11:51:19');
 **

 I want a query that returns for this data '04/13/2002' and '04/14/2002'.

 Use WHERE clause to set retrieval conditions for rows, f.e WHERE
 mep_date='04/13/2002' OR mep_date='04/14/2002', and GROUP BY mep_date.

Not exactly. I want the retrieval to be automatic. I don't want to know in
advance what the unique dates are; I want the query to tell me which
unique dates exist. So for this data, two dates would be returned. For
other data, more or fewer dates.
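
For the record, the usual way to get exactly that list from the table quoted
above would be something like:

  SELECT DISTINCT mep_date FROM metrics_events_power;

or, equivalently, SELECT mep_date FROM metrics_events_power GROUP BY mep_date;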


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Optimizing Custom Full Text Index

2003-11-19 Thread Mike Boone
Matt, thanks for your suggestions.

The index on content_id was for deletes. You're right...I could get rid of
the C.id field...it has not been useful.

To avoid making too many changes at once, I created a UNIQUE index on
stem_word_id and content_id. Now EXPLAIN says it's using the UNIQUE index,
but also:

Using where; Using index; Using temporary; Using filesort

I'm not sure if that's an improvement or not. I don't notice any speed
changes.

Searches can return any number of results, say a hundred or so on average.
I'm using LIMIT for 25 results at a time, but also SQL_CALC_FOUND_ROWS to
know how many total results there are. Some queries can take 30 or more
seconds to run, average is maybe 10 seconds.

The reason I'm not using MySQL's fulltext: this project started on a 3.23.x
server where I had no control over the setup parameters (hosted system).
There are several important words used that are < 4 characters. So I went to
building my own index and using word stems (requested by the client). The
searches then ran fairly quickly (faster than now), so I'm sure some of this
has to do with the hardware/software configuration of that server, but I
don't know what it was. The project was then moved to a new server, the one
I described. I have contol over the MySQL parameters but I think the
hardware/software combo is not as powerful. I installed 4.0.16 and am trying
to improve the performance of my existing stem-index search.

My word search on table B is also slow at times. I wonder if I should split
the word search query into separate queries to take advantage of caching. So
if one person searches "piano move" and the other "play piano" and someone
else "piano instruct", the results for "piano" would be cached.
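
As a sketch of that idea (names as in the earlier messages, and assuming the
query cache is enabled, i.e. query_cache_size > 0), the per-word lookups would
be issued one at a time so that repeated words arrive as identical statements:

  SELECT id FROM B WHERE stem_word = 'piano';
  SELECT id FROM B WHERE stem_word = 'move';

A second search containing 'piano' could then be answered for that word
straight from the query cache.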

So many variables to consider!

Thanks again for your input.
Mike.

-Original Message-
From: Matt W [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 9:25 PM
To: Mike Boone; [EMAIL PROTECTED]
Subject: Re: Optimizing Custom Full Text Index


Hi Mike,

Those tables aren't that big for what you're doing (which is about how
I'd do it if I wasn't using built-in full-text :-)).

How many results are your searches returning? How long are the queries
taking?

The C table: Do you need the index on content_id? For deletes or
something? Doesn't really matter for speed, but you might as well dump
it if it's not used. What's the PRIMARY id there for? Just to have an id
column? :-) It looks like you can get rid of it. I'd make a composite
PRIMARY KEY on (stem_word_id, content_id). Otherwise make it UNIQUE if
you *need* the PRIMARY id. This will make EXPLAIN say Using index on C
for searches which will save a lot of random disk seeks to the data
file.

Can I ask what the problems are with MySQL's built-in full-text search?
I know there's a few since I've encountered them too, but I have some
ideas to work around them. Unfortunately, one that would be hard to work
around is stemming (waiting for that to be implemented internally). :-(
Or are you just doing stemming to save space in the index and not for
functionality?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



archive data

2003-11-19 Thread Hiu Yen Onn
hi,

I am using Linux Red Hat 8 to run MySQL. I installed MySQL version 4.0.13
using the RPM package. By default, the data files of MySQL are located at
/var/lib/mysql, but I have limited disk space on the /var dir (it is about
1G), so it was easily filled up with data. Now I want to move the data to
another dir. How can I switch the dir so that I have plenty of disk space to
store my data? Right now I am facing the problem that I can't write into the
db because there is no disk space left.

How can I archive my data, then switch the storage location to another dir
(previously it was /var/lib/mysql) which has more disk space, and then restore
the data? How can I do this? Please advise. Thanks

Cheers,
yenonn
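
One common way to handle this (paths and the init script name are assumptions;
adjust them, and mind file ownership) is roughly:

  /etc/init.d/mysql stop                       # stop the server first (script name may differ)
  cp -a /var/lib/mysql /bigdisk/mysql-data     # copy the data to the larger filesystem
  chown -R mysql:mysql /bigdisk/mysql-data
  # then point the server at the new location in /etc/my.cnf:
  #   [mysqld]
  #   datadir = /bigdisk/mysql-data
  /etc/init.d/mysql start

Keep the old copy around until the server has been verified to start cleanly
against the new directory.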


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL

2003-11-19 Thread Theresa Tan
To whom it may concern,

Knowing that MySQL is open source software I'm in favour of this
application and would like to try out this product, wonder why I can
download this application for my necessary practice for both Client/Server
application.


Tks.

**

Best Regards
Theresa Tan
Schenker (Asia Pacific) Pte Ltd
Regional Support Centre / Secretary to Chief Information Officer
No. 2 Changi South Ave 1
Singapore 486149

Tel: 65 6 245 5022
Fax: 65 6 245 5145

mailto:[EMAIL PROTECTED]
Schenker Website : http://www.schenker.com.sg
Disclaimer : http://rsc.schenker.com.sg/rscweb/disclaimer.htm


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

batch files usage

2003-11-19 Thread r.schacht
Hi,

I'm new to MySQL. I want to try to create a new database, but I type in the
same commands all the time.

What I want to be able to do is type the commands into text files, then run
them from inside MySQL. This way if I want to change something I can just
drop the database and recreate it.

Any help would be appreciated.
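
For what it's worth, the two usual ways to run a file of SQL statements (the
file and database names below are made up):

  shell> mysql -u root -p mydb < create_tables.sql

or, from inside the mysql client:

  mysql> source create_tables.sql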







Randy Schacht
Home (978) 263-3799
Cell (508) 380-1492

Randy Schacht

http://r.schacht.home.comcast.net


Cannot find an index that will be used for SELECT

2003-11-19 Thread Ed McNierney
I'm completely stumped trying to create any index MySQL will use at all
for my SELECT.
 
I've got a table with four columns describing the upper-right and
lower-left coordinates of a rectangle - RIGHT, LEFT (max X, min X), TOP,
BOTTOM (max Y, min Y).  I have an X, Y coordinate and I want to select
all rows for which the X, Y point is inside the rectangle.  In other
words:
 
SELECT * FROM ROWS WHERE X <= RIGHT AND X >= LEFT AND Y >= BOTTOM AND Y <= TOP
 
(using BETWEEN didn't make any difference).
 
I have tried indexes on RIGHT, LEFT, TOP, BOTTOM, RIGHT+LEFT,
TOP+BOTTOM, and RIGHT+LEFT+TOP+BOTTOM and none of them get used.  All
fields are FLOAT.  I can't figure out how to get any index to be used,
nor how to restructure my query to improve things.  Thanks!
 
- Ed
 
Ed McNierney
President and Chief Mapmaker
TopoZone.com
[EMAIL PROTECTED]
 


Re: batch files usage

2003-11-19 Thread Dan Wilterding
On 19 Nov 2003 at 23:12, r.schacht wrote:

 Hi,
 
 I'm new to MySQL. I want to try to create a new database, but I type in
 the same commands all the time.
 
 What I want to be able to do is type the commands into text files, then
 run them from inside MySQL. This way if I want to change something I can
 just drop the database and recreate.
 
 Any help would be appreciated.
 
 
 
 
 
  
 
 Randy Schacht
 Home (978) 263-3799
 Cell (508) 380-1492
 
 Randy Schacht
 
 http://r.schacht.home.comcast.net
 

Have a look at http://www.mysql.com/doc/en/index.html .  Chapter 3 is 
an online tutorial that I am working through now, and it seems to be 
pretty good. 
 
Dan Wilterding
[EMAIL PROTECTED]
 




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL

2003-11-19 Thread David Lloyd

Theresa,

 Knowing that MySQL is open source software I'm in favour of this
 application and would like to try out this product, wonder why I can
 download this application for my necessary practice for both 
 Client/Server application.

I assume you mean "How can I download the MySQL server and client" as
opposed to "Why can I download it". Here is a good start:

 * http://www.mysql.com/downloads/index.html

The documentation (check the documentation link) gives more details. There
are also books you can purchase:

 * Paul DuBois' Using MySQL (New Riders) is very good

DSL

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL lost connection

2003-11-19 Thread John Cole
hi,

I installed MySQL 4.0.16-nt on a Win2000 Pro platform.
 I want to implement a multi-table delete, so I typed the
following SQL statement:

Delete from tablea, tableb using tablea, tableb
where tableb.number=2 and tableb.number=3;

the server went down and generated the following message:

`Mysql-nt has generated errors and will be close by
Windows.  You will need to restart the program. ERROR
2013: Lost connection to MySQL server during query.'

I know the above SQL statement has a problem; however, I
don't know why the server service should be stopped.  I
tried the same SQL statement on the MySQL 4.0.15-Max Linux
platform; the server didn't go down, but it generated the
following message:

`Lost connection to MySQL server during query'

and the service will be automatically restarted.

Does anyone know the answer?  Any reply will be
appreciated, thanks. 


Want to chat instantly with your online friends?  Get the FREE Yahoo!
Messenger http://mail.messenger.yahoo.co.uk

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



is it possible?

2003-11-19 Thread Eugene R. Miller
This is kind of a silly question: is there an easy way to ...

SELECT Status, rating, COUNT(*) FROM song GROUP BY pldupldqd, rating ORDER BY 
Status, rating DESC

This gives me all the information I need ...

What I would like to do is something... like

SELECT rating, count(WHERE pldupldqd = 0), count (WHERE pldupldqd = 1), COUNT 
(WHERE pldupldqd = 2) FROM song GROUP BY rating


The idea is to get...

3 different columns, each counting the rows where pldupldqd equals that value.
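
One way that should work (it relies on a true comparison evaluating to 1 and
a false one to 0, so SUM acts as a conditional count):

  SELECT rating,
         SUM(pldupldqd = 0) AS cnt_0,
         SUM(pldupldqd = 1) AS cnt_1,
         SUM(pldupldqd = 2) AS cnt_2
  FROM song
  GROUP BY rating;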

Erm
---
www.the-erm.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]