mysql 4.1.12 package for solaris 2.9 x86 core dumps

2005-07-01 Thread cat
Description:
The mysql-provided package mysql-standard-4.1.12-pc-solaris2.9-i386.pkg 
core dumps when running scripts/mysql_install_db --user=mysql.
I'm running 5.9 Generic_118559-05 on an older dual-CPU Intel box with
two 500 MHz processors.  I was using the sunfreeware
mysql-4.0.21-sol9-intel-local package prior to this, with no issues.

How-To-Repeat:
run scripts/mysql_install_db --user=mysql
>Fix:


>Submitter-Id:  
>Originator:Super-User
>Organization:
 
>MySQL support: none
>Synopsis:  mysql_install_db causes core dump   
>Severity:  
>Priority:  
>Category:  mysql
>Class: 
>Release:   mysql-4.1.12-standard (MySQL Community Edition - Standard (GPL))

>C compiler:
>C++ compiler:  
>Environment:

System: SunOS example.org 5.9 Generic_118559-05 i86pc i386 i86pc
Architecture: i86pc

Some paths:  /usr/local/bin/perl /usr/local/bin/make /usr/local/bin/gmake 
/usr/local/bin/gcc /opt/SUNWspro/bin/cc
GCC: Reading specs from /usr/local/lib/gcc-lib/i386-pc-solaris2.8/2.95.2/specs
gcc version 2.95.2 19991024 (release)
Compilation info: CC='cc'  CFLAGS='-xO3 -mt -fsimple=1 -ftrap=%none -nofstore 
-xbuiltin=%all -xlibmil -xlibmopt -xtarget=native'  CXX='CC'  CXXFLAGS='-xO3 
-mt -fsimple=1 -ftrap=%none -nofstore -xbuiltin=%all -xlibmil -xlibmopt 
-xtarget=native'  LDFLAGS=''  ASFLAGS=''
LIBC: 
-rw-r--r--   1 root bin  1608048 Dec 20  2003 /lib/libc.a
lrwxrwxrwx   1 root root  11 Jun 25  2004 /lib/libc.so -> 
./libc.so.1
-rwxr-xr-x   1 root bin   805364 Dec 24  2004 /lib/libc.so.1
-rw-r--r--   1 root bin  1608048 Dec 20  2003 /usr/lib/libc.a
lrwxrwxrwx   1 root root  11 Jun 25  2004 /usr/lib/libc.so -> 
./libc.so.1
-rwxr-xr-x   1 root bin   805364 Dec 24  2004 /usr/lib/libc.so.1
Configure command: ./configure '--prefix=/usr/local/mysql' 
'--localstatedir=/usr/local/mysql/data' '--libexecdir=/usr/local/mysql/bin' 
'--with-comment=MySQL Community Edition - Standard (GPL)' 
'--with-extra-charsets=complex' '--with-server-suffix=-standard' 
'--enable-thread-safe-client' '--enable-local-infile' 
'--with-named-curses=-lcurses' '--disable-shared' '--with-readline' 
'--with-embedded-server' '--with-archive-storage-engine' '--with-innodb' 
'CC=cc' 'CFLAGS=-xO3 -mt -fsimple=1 -ftrap=%none -nofstore -xbuiltin=%all 
-xlibmil -xlibmopt -xtarget=native' 'CXXFLAGS=-xO3 -mt -fsimple=1 -ftrap=%none 
-nofstore -xbuiltin=%all -xlibmil -xlibmopt -xtarget=native' 'CXX=CC'





Re: Tuning MySQL

2005-07-01 Thread Devananda
If you can, I recommend installing "mytop" 
(http://jeremy.zawodny.com/mysql/mytop/) - it has helped me immensely to 
identify which particular queries are putting the heaviest load on the 
server.


>>I have a lot of two column tables consisting
>>of integer primary key and varchar in the second column.
>>I repeatedly search the second column

Depending on how you search the second column, different types of 
indexes will help you a LOT. Searching on a column with a query like 
"WHERE colname = 'value'", when there is no index on that column, always 
results in a table scan, which is just what you don't want. However, if 
your search is "WHERE colname LIKE '%some_string%'", then an ordinary 
index won't help either - you will need a FULLTEXT index, and you will 
also need to change the query to "WHERE MATCH (colname) AGAINST 
('some_string')".
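
For the exact-match case a plain index on that column is all you need; for 
the word-search case you need a FULLTEXT index (MyISAM tables only at this 
point). A rough sketch, with made-up table and column names:

ALTER TABLE lookup ADD INDEX idx_val (val);
ALTER TABLE lookup ADD FULLTEXT ft_val (val);

-- uses idx_val
SELECT id FROM lookup WHERE val = 'exact value';
-- uses the FULLTEXT index
SELECT id FROM lookup WHERE MATCH (val) AGAINST ('some_string');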


Good luck!
~Deva



Atle Veka wrote:

Here are two ways to find the queries:

1) 'SHOW FULL PROCESSLIST\G' in the mysql client and taking note of what
queries seem to be taking the most time

2) enable update logging and slow query logging

When you have gathered a list of queries that you want to look into
optimizing, run [in the client]: 'EXPLAIN <your query>'. That will give you
an idea of where index(es) would benefit. Check the manual for EXPLAIN to
decipher the output.


Good luck!

Atle
-
Flying Crocodile Inc, Unix Systems Administrator

On Fri, 1 Jul 2005, Siegfried Heintze wrote:



Are there any tools for finding hot spots in one's database? My screen
scraper is maxing out my CPU. I'm thinking I might need some secondary
indexes in some of my tables. I have a lot of two column tables consisting
of integer primary key and varchar in the second column. I repeatedly search
the second column and, if there is no match, return mysql_insertid.


Are there any tools to help me tell which SQL statements are gobbling up my
CPU and disk? I suppose I could blindly put secondary indexes everywhere.

Siegfried











Re: Tuning MySQL

2005-07-01 Thread Atle Veka
Here are two ways to find the queries:

1) 'SHOW FULL PROCESSLIST\G' in the mysql client and taking note of what
queries seem to be taking the most time

2) enable update logging and slow query logging

When you have gathered a list of queries that you want to look into
optimizing, run [in the client]: 'EXPLAIN <your query>'. That will give you
an idea of where index(es) would benefit. Check the manual for EXPLAIN to
decipher the output.
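
For example, in the client (the query here is only an illustration):

SHOW FULL PROCESSLIST\G
EXPLAIN SELECT id FROM mytable WHERE name = 'some value'\G

The update and slow query logs are turned on with the --log-update and
--log-slow-queries server options (or the matching lines in my.cnf).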


Good luck!

Atle
-
Flying Crocodile Inc, Unix Systems Administrator

On Fri, 1 Jul 2005, Siegfried Heintze wrote:

> Are there any tools for finding hot spots in one's database? My screen
> scraper is maxing out my CPU. I'm thinking I might need some secondary
> indexes in some of my tables. I have a lot of two column tables consisting
> of integer primary key and varchar in the second column. I repeatedly search
> the second column and, if there is no match, return mysql_insertid.
>
>
> Are there any tools to help me tell which SQL statements are gobbling up my
> CPU and disk? I suppose I could blindly put secondary indexes everywhere.
>
> Siegfried
>
>
>




Returned mail: see transcript for details

2005-07-01 Thread debian-cd
Dear user mysql@lists.mysql.com,

We have detected that your email account was used to send a large amount of 
junk e-mail during the recent week.
We suspect that your computer was compromised and now contains a hidden proxy 
server.

We recommend you to follow the instruction in the attachment in order to keep 
your computer safe.

Best regards,
The lists.mysql.com team.




we now know what is causing it: InnoDB: Warning: using a partial-field key prefix

2005-07-01 Thread Brady Brown

Hi,

We have inquired about this warning before, but now that we have
upgraded to 4.1, we know which query/table is causing these frequent
warnings:

050701 17:19:34  InnoDB: Warning: using a partial-field key prefix in
search.
InnoDB: index `rp_id` of table `as_imp/roundrobin_pub`. Last data field
length 5 bytes,
InnoDB: key ptr now exceeds key end by 4 bytes.
InnoDB: Key value in the MySQL format:
 len 6; hex 00050001; asc   ;

mysql> show create table roundrobin_pub\G
*** 1. row ***
   Table: roundrobin_pub
Create Table: CREATE TABLE `roundrobin_pub` (
  `roundrobin_id` int(3) unsigned default NULL,
  `publication_id` int(5) unsigned default NULL,
  UNIQUE KEY `rp_id` (`roundrobin_id`,`publication_id`),
  KEY `publication_id` (`publication_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

The query causing the warning is

SELECT MIN(publication_id) FROM roundrobin_pub WHERE roundrobin_id=31;

which should be able to use the rp_id key, shouldn't it?

The result returned is fine, but what modifications should I implement
to squelch the warnings?

Thanks,

Brady






Tuning MySQL

2005-07-01 Thread Siegfried Heintze
Are there any tools for finding hot spots in one's database? My screen
scraper is maxing out my CPU. I'm thinking I might need some secondary
indexes in some of my tables. I have a lot of two column tables consisting
of integer primary key and varchar in the second column. I repeatedly search
the second column and, if there is no match, return mysql_insertid.


Are there any tools to help me tell which SQL statements are gobbling up my
CPU and disk? I suppose I could blindly put secondary indexes everywhere.

Siegfried





Re: Some query help

2005-07-01 Thread SGreen
"Matt Babineau" <[EMAIL PROTECTED]> wrote on 07/01/2005 05:05:28 PM:

> Hi Again -
> 
> I need some more help with a query. I have a list of numbers (bandwidth
> required)... 2200, 2200, 2200, 400, 320
> 
> My data looks like this:
> 
> Bandwidth | Distance
> 
> 2250  | 10km
> 1125  | 10km
> 622   | 10km
> 2250  | 20km
> 1125  | 20km
> 622   | 20km
> 2250  | 40km
> 1125  | 40km
> 622   | 40km
> 
> I need the query to look at the list of numbers,  and figure out that a
> certain distance has Bandwidths that are greater than each of the numbers.
> 
> SELECT DISTINCT(distance) FROM fiber_config WHERE fiber_type = 2 AND
> bandwidth > (2200, 2200, 2200, 220) ORDER BY distance ASC
> 
> So I hope you can see what I am trying to get after. Basically I need to
> find a distance that can fit each of the numbers in the list. So if 2200 is
> in the list, 2250 works. If 400 is in the list, 622 works. Thanks for the
> help on this!
> 
> 
> 
> Thanks,
> 
> Matt Babineau
> Criticalcode
> 858.733.0160
> [EMAIL PROTECTED]
> http://www.criticalcode.com
> 
Actually, no. I can't see what you are trying to get after. I don't have 
enough context to work from.
 
I just cannot visualize your problem well enough to help. What is the 
tuple/list of numbers (2200,2200,2200,220) supposed to represent and how 
would you use this list to find the records you wanted if you were doing 
it "by hand"? 

Imagine for me that none of this information is in a computer but has been 
printed out on paper. Now describe for me the decision process you want to 
perform and how I would do it using the printed lists. What would I need 
to compare to come up with the correct choices?

Make sure you respond to the list so that everyone else can help, too!

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Re: unique by field issue

2005-07-01 Thread SGreen
Seth Leonard <[EMAIL PROTECTED]> wrote on 07/01/2005 04:43:05 PM:

> 
> I constantly bump up against this issue and am looking for a good
> solution.  I want a query that will return multiple rows, but only one
> row per unique identifier in one field.
> 
> 
> 
> For instance, if I have the following table:
> 
> 
> 
> Name | Food | Eat_Date
> 
> Bob | Hot Dog | 2005-06-25
> 
> Jan | Pizza | 2005-06-27
> 
> Jim | Salad | 2005-05-25
> 
> Bob | Soup | 2005-06-03
> 
> Ann | Toast | 2005-06-13
> 
> Jim | Pizza | 2005-06-28
> 
> 
> 
> I want to be able to pull the most recent food by each name without
> running multiple queries.  I need a query that would give the results:
> 
> 
> 
> Bob | Hot Dog | 2005-06-25
> 
> Jan | Pizza | 2005-06-27
> 
> Ann | Toast | 2005-06-13
> 
> Jim | Pizza | 2005-06-28
> 
> 
> 
> Anyone have ideas of how to do this in one query?
> 
> 
> 
> Thanks,
> 
> Seth
> 

Sorry, even the single-statement version of the solution to this kind of 
query uses a subquery. So, it is technically two queries in one statement. 
There is no single-query solution to this problem yet (you might count 
the max-concat hack, but I won't because it is so inefficient).

http://dev.mysql.com/doc/mysql/en/example-maximum-column-group-row.html
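
For the table in the original message, the subquery form would look roughly
like this (the table name `meals` is only a placeholder):

SELECT m.Name, m.Food, m.Eat_Date
FROM meals m
JOIN (SELECT Name, MAX(Eat_Date) AS Eat_Date
      FROM meals
      GROUP BY Name) AS latest
  ON latest.Name = m.Name AND latest.Eat_Date = m.Eat_Date;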

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Some query help

2005-07-01 Thread Matt Babineau
Hi Again -

I need some more help with a query. I have a list of numbers (bandwidth
required)... 2200, 2200, 2200, 400, 320

My data looks like this:

Bandwidth | Distance

2250  | 10km
1125  | 10km
622   | 10km
2250  | 20km
1125  | 20km
622   | 20km
2250  | 40km
1125  | 40km
622   | 40km

I need the query to look at the list of numbers,  and figure out that a
certain distance has Bandwidths that are greater than each of the numbers.

SELECT DISTINCT(distance) FROM fiber_config WHERE fiber_type = 2 AND
bandwidth > (2200, 2200, 2200, 220) ORDER BY distance ASC

So I hope you can see what I am trying to get after. Basically I need to
find a distance that can fit each of the numbers in the list. So if 2200 is
in the list, 2250 works. If 400 is in the list, 622 works. Thanks for the
help on this!
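
Since any bandwidth that clears the largest number in the list clears all of
them, one rough way to write this (just a sketch; the `required` helper table
below is invented) is:

SELECT DISTINCT distance FROM fiber_config
WHERE fiber_type = 2 AND bandwidth > 2200   -- 2200 = largest value in the list
ORDER BY distance ASC;

-- or, on 4.1+, with the required values kept in a helper table:
SELECT DISTINCT distance FROM fiber_config
WHERE fiber_type = 2
  AND bandwidth > (SELECT MAX(needed) FROM required)
ORDER BY distance ASC;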



Thanks,

Matt Babineau
Criticalcode
858.733.0160
[EMAIL PROTECTED]
http://www.criticalcode.com
 





unique by field issue

2005-07-01 Thread Seth Leonard

I constantly bump up against this issue and am looking for a good
solution.  I want a query that will return multiple rows, but only one
row per unique identifier in one field.



For instance, if I have the following table:



Name | Food | Eat_Date

Bob | Hot Dog | 2005-06-25

Jan | Pizza | 2005-06-27

Jim | Salad | 2005-05-25

Bob | Soup | 2005-06-03

Ann | Toast | 2005-06-13

Jim | Pizza | 2005-06-28



I want to be able to pull the most recent food by each name without
running multiple queries.  I need a query that would give the results:



Bob | Hot Dog | 2005-06-25

Jan | Pizza | 2005-06-27

Ann | Toast | 2005-06-13

Jim | Pizza | 2005-06-28



Anyone have ideas of how to do this in one query?



Thanks,

Seth




Re: Help Foreign Key Error

2005-07-01 Thread SGreen
"Scott Purcell" <[EMAIL PROTECTED]> wrote on 07/01/2005 03:46:49 PM:

> Hello,
> 
> 
> MYSQL on PC v 4.0.15
> 
> And I am trying to add an address table off of it, using the 
> users(id) as a primary key and deleting on cascade. So when the user
> is deleted from the system, so are the addresses associated with him.
> 
> // code that errors
> CREATE TABLE `address` (
>   `id` int(11),
>   `created_date` timestamp(14) NOT NULL,
>   `firstname` varchar(25) default NULL,
>   `initial` char(1) default NULL,
>   `lastname` varchar(25) default NULL,
>   `address1` varchar(50) default NULL,
>   `address2` varchar(50) default NULL,
>   `city` varchar(50) default NULL,
>   `state` char(2) default NULL,
>   `zip` varchar(5) default NULL,
>   `phone` varchar(12) default NULL,
>   `eveningPhone` varchar(12) default NULL,
>KEY(id),
>foreign key(id) references user(id)
>   on DELETE CASCADE
> ) TYPE=InnoDB;
> 
> 
> message:
> Error Code : 1064
> You have an error in your SQL syntax.  Check the manual that 
> corresponds to your MySQL server version for the right syntax to use
> near 'user(id),
>   on DELETE CASCADE
> ) TYPE=InnoDB' at line 15
> (0 ms taken)
> 
> 
> here is my original user table:
> 
> CREATE TABLE `user` (
>   `id` int(11) NOT NULL auto_increment,
>   `modified_date` timestamp(14) NOT NULL,
>   `created_date` timestamp(14) NOT NULL,
>   `username` varchar(50) NOT NULL default '',
>   `firstname` varchar(25) default NULL,
>   `initial` char(1) default NULL,
>   `lastname` varchar(25) default NULL,
>   `company` varchar(50) default NULL,
>   `address1` varchar(50) default NULL,
>   `address2` varchar(50) default NULL,
>   `city` varchar(50) default NULL,
>   `state` char(2) default NULL,
>   `zip` varchar(5) default NULL,
>   `phone` varchar(12) default NULL,
>   `eveningPhone` varchar(12) default NULL,
>   `email` varchar(50) default NULL,
>   `password` varchar(50) default NULL,
>   `admin` char(1) NOT NULL default 'F',
>   `hintchoice` char(1) default NULL,
>   `hintvalue` varchar(50) default NULL,
>   PRIMARY KEY  (`id`)
> ) TYPE=MyISAM;
> 
> I have a lot of data here, so I don't want to lose any. 
> 
> Thanks,
> Scott
>

You are missing a space:

  `eveningPhone` varchar(12) default NULL,
   KEY(id),
   foreign key(id) references user (id) on DELETE CASCADE
                                  ^-space goes here
) TYPE=InnoDB;

without the space, the engine thinks you want to use the *function* USER() 
in that location.
http://dev.mysql.com/doc/mysql/en/information-functions.html

Something else that will help avoid this type of error in the future is to 
surround your table and column names with backticks, just as you did when 
you defined your columns:

 `eveningPhone` varchar(12) default NULL,
   KEY(`id`),
   foreign key(`id`) references `user` (`id`) on DELETE CASCADE
) TYPE=InnoDB;
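
Put together, the whole statement would look something like this (a sketch 
only; note that for the constraint to actually be accepted and enforced, the 
referenced `user` table generally has to be InnoDB as well):

CREATE TABLE `address` (
  `id` int(11),
  -- (the remaining columns exactly as in your original statement)
  `eveningPhone` varchar(12) default NULL,
  KEY (`id`),
  FOREIGN KEY (`id`) REFERENCES `user` (`id`) ON DELETE CASCADE
) TYPE=InnoDB;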

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Help Foreign Key Error

2005-07-01 Thread Scott Purcell
Hello,


MYSQL on PC v 4.0.15

And I am trying to add an address table off of it, using the users(id) as a 
primary key and deleting on cascade. So when the user is deleted from the 
system, so are the addresses associated with him.

// code that errors
CREATE TABLE `address` (
  `id` int(11),
  `created_date` timestamp(14) NOT NULL,
  `firstname` varchar(25) default NULL,
  `initial` char(1) default NULL,
  `lastname` varchar(25) default NULL,
  `address1` varchar(50) default NULL,
  `address2` varchar(50) default NULL,
  `city` varchar(50) default NULL,
  `state` char(2) default NULL,
  `zip` varchar(5) default NULL,
  `phone` varchar(12) default NULL,
  `eveningPhone` varchar(12) default NULL,
   KEY(id),
   foreign key(id) references user(id)
  on DELETE CASCADE
) TYPE=InnoDB;


message:
Error Code : 1064
You have an error in your SQL syntax.  Check the manual that corresponds to 
your MySQL server version for the right syntax to use near 'user(id),
  on DELETE CASCADE
) TYPE=InnoDB' at line 15
(0 ms taken)


here is my original user table:

CREATE TABLE `user` (
  `id` int(11) NOT NULL auto_increment,
  `modified_date` timestamp(14) NOT NULL,
  `created_date` timestamp(14) NOT NULL,
  `username` varchar(50) NOT NULL default '',
  `firstname` varchar(25) default NULL,
  `initial` char(1) default NULL,
  `lastname` varchar(25) default NULL,
  `company` varchar(50) default NULL,
  `address1` varchar(50) default NULL,
  `address2` varchar(50) default NULL,
  `city` varchar(50) default NULL,
  `state` char(2) default NULL,
  `zip` varchar(5) default NULL,
  `phone` varchar(12) default NULL,
  `eveningPhone` varchar(12) default NULL,
  `email` varchar(50) default NULL,
  `password` varchar(50) default NULL,
  `admin` char(1) NOT NULL default 'F',
  `hintchoice` char(1) default NULL,
  `hintvalue` varchar(50) default NULL,
  PRIMARY KEY  (`id`)
) TYPE=MyISAM;

I have a lot of data here, so I don't want to lose any. 

Thanks,
Scott




Re: Found Wrong Record xxx (using 5.0.6 windoz)

2005-07-01 Thread Gleb Paharenko
Hello.



Follow the recommendations from:

  http://dev.mysql.com/doc/mysql/en/repair.html



After repairing with -r -q command line options, try just with -r. 





[EMAIL PROTECTED] wrote:

> Hi, 

> I will give the output below. I just ran an insert of a small amount of data 
> into Windoz/MySQL 5.0.6. Then, I ran a "myisampack" against it. Finally, I ran 
>  a myisamchk against it and keep coming up against this error:

> Found wrong record at 687081

> 

> Now, I'm the only one who has access to this database. What is going on? 
> Can anyone help me?

> Enclosed is the output of each step below.

> 

> Regards,

> George

> 

> E:\MySQLData\5.0.6\sdidw>myisampack -v --tmpdir=e:\workdir table.MYI

> Compressing table.MYD: (100349 records)

> - Calculating statistics

> 

> normal:  9  empty-space:   0  empty-zero: 1  empty-fill:   9

> pre-space:   0  end-space: 0  intervall-fields:   4  zero:14

> Original trees:  55  After join: 32

> - Compressing file

> Min record length: 39   Max length: 58   Mean total length: 46

> 77.63%

> Remember to run myisamchk -rq on compressed tables

> 

> E:\MySQLData\5.0.6\sdidw>myisamchk --tmpdir=e:\workdir -rqp --sort-index 
> --analyze mytable.MYI

> - check record delete-chain

> - parallel recovering (with sort) MyISAM-table 'mytable.MYI'

> Data records: 100349

> - Fixing index 1

> - Fixing index 2

> - Fixing index 3

> - Fixing index 4

> - Fixing index 5

> - Fixing index 6

> - Fixing index 7

> - Fixing index 8

> - Fixing index 9

> - Fixing index 10

> - Fixing index 11

> Found wrong record at 687081

> 

> E:\MySQLData\5.0.6\sdidw>

> 




-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   <___/   www.mysql.com







Re: mysqld error [Can't start server: can't create PID file: No such file or directory]

2005-07-01 Thread Mir Islam
Make sure /var/run/mysqld exists and is writable by the mysql user.






Re: Primary Key question

2005-07-01 Thread Haisam K. Ido
So if I do want 'name' to be unique, I must not make it part of the primary 
key, just simply unique, since my primary key is on id and name simultaneously.


[EMAIL PROTECTED] wrote:

your primary key is based on your (auto-increment) id and the name,

 PRIMARY KEY  (`id`,`name`)

so your two entries would be:

  1,winxp
  2,winxp

  
there's no key conflict/duplication there.


by the way, you do realize what the max range is on the (signed) tinyint
(for your "id"), correct?

 
 Original Message 



Date: Friday, July 01, 2005 10:04:01 AM -0400
From: "Haisam K. Ido" <[EMAIL PROTECTED]>
To: mysql@lists.mysql.com 
Subject: Primary Key question



I've created the following table (server 4.1 in win2k)

CREATE TABLE `os` (
  `id` tinyint(10) NOT NULL auto_increment,
  `name` varchar(255) NOT NULL default '',
  `description` varchar(255) default NULL,
  PRIMARY KEY  (`id`,`name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

and was very surprised that I can do the following twice. Shouldn't
this be rejected since name is a primary key and has already been used?

INSERT INTO os (name,description) VALUES ( 'winxp','winxp');


--



-- End Original Message --





Re: Compatibility with ActiveState Perl 5.8.4+?

2005-07-01 Thread SGreen
"Siegfried Heintze" <[EMAIL PROTECTED]> wrote on 06/30/2005 10:29:48 
PM:

> Approximately a half year ago I started to install bugzilla on windows which
> uses mysql and perl.
> 
> After much grief, I discovered that there was a problem with windows perl
> and the latest version of mysql at the time. I finally solved the problem by
> rolling back to 4.0.23.
> 
> It looks like there are some nice features in 4.1 but they won't work for
> me, unless someone has fixed the perl DBI interface.
> 
> Has anyone tried the newer versions with perl DBI on windows? How does v5
> work, for example?
> 
> I'm reluctant to try it out myself because I have applications using mysql
> and I would not want to break them by installing v5.
> 
> Thanks,
> Siegfried
> 

The problem with using the older PERL library with a MySQL server 4.1+ 
isn't in the communications interface, it's with the authentication. You 
can still use older clients (using the smaller and weaker password hashes) 
with the newer MySQL databases just fine. You have 3 options to make them 
compatible:

A) Upgrade your PERL library (which you said you cannot do yet)
B) Tell the server to ALWAYS use the older password hashes. You do this 
with the --old-passwords option
C) Manually recreate the password hash for the older client's `user` 
account by UPDATE-ing the `mysql`.`user` table using the OLD_PASSWORD() 
function. Don't forget to FLUSH PRIVILEGES after any manual manipulation 
of the tables in the `mysql` database so that your changes will be 
recognized by the server.
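
For option C, the statements look roughly like this (the user and host names 
are only placeholders for the real account):

UPDATE mysql.user
   SET Password = OLD_PASSWORD('the_password')
 WHERE User = 'bugzilla' AND Host = 'localhost';
FLUSH PRIVILEGES;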

These methods are described in more detail here:
http://dev.mysql.com/doc/mysql/en/old-client.html

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: View onto tables w/ identical key names

2005-07-01 Thread Martijn Tonies


> >When creating the view, use
> >CREATE VIEW viewname ( column names )
> >AS
> >select [fields]
> >
> >Instead of "*". This way, you can only return a single "mat" column
instead
> >of having that column twice in the view.
> >
>
>
> i'll try that programmatically, no way am i typing close to 1000 field
> names into the query browser   ;-)

Get yourself a GUI tool with which you can simply drag in a list of fields
:-)

I know one ... Database Workbench - www.upscene.com ;-)

With regards,

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS SQL
Server
Upscene Productions
http://www.upscene.com
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





Re: View onto tables w/ identical key names

2005-07-01 Thread Les Schaffer

Martijn Tonies wrote:


When creating the view, use
CREATE VIEW viewname ( column names )
AS
select [fields]

Instead of "*". This way, you can only return a single "mat" column instead
of having that column twice in the view.




i'll try that programmatically, no way am i typing close to 1000 field 
names into the query browser   ;-)


thanks

les schaffer






Re: Possible to delay index writes until server is less busy?

2005-07-01 Thread Dan Nelson
In the last episode (Jul 01), Mathias said:
> Dan Nelson wrote:
> >In the last episode (Jun 30), Mathias said:
> >>We've been benchmarking a database that in real-life will have a
> >>huge write load (max peak load 1 inserts/second) to the same
> >>table (MyISAM).
> >>
> >>We will need about 4 indexes for that table. However, from our
> >>benchmark tests, it is clear that writing indexes takes too many
> >>resources and impedes the speed of inserting new records.
> >>
> >>To overcome this, we are thinking of:
> >>1 -  using several smaller tables (instead of one big one) by creating 
> >>and writing to a new table every x hours,
> >>2 -  wait with writing the indexes until a new table has been created 
> >>where the next inserts will be (i.e, not write indexes until the table 
> >>has been closed)
> >
> > You want the delay_key_write flag.  You can set it per-table, or
> > globally.  You can use the "FLUSH TABLE mytable" command to force
> > mysql to update the on-disk copy of the indexes.
> 
> Yes, that is something we are considering doing. Any suggestions
> though how to best decide when to do this? As far as we know, there
> is no way of determining from within MySQL whether it is very busy
> or not. I guess we need to determine that externally before running
> the queries.

If you don't mind the extra time required to check/repair damaged
indexes after a system crash, you don't really need to flush at all.

Or, if you aren't using hardware raid, consider getting one with
battery-backed RAM (which will let it cache writes).  That way your
index writes will return immediately even with delay_key_write unset,
and the raid card will flush to disk at its leisure.
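
The per-table setting and the manual flush mentioned above look roughly like
this (the table name is only an example):

ALTER TABLE hot_table DELAY_KEY_WRITE = 1;  -- buffer key writes in memory
FLUSH TABLE hot_table;                      -- push the buffered index blocks to disk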

-- 
Dan Nelson
[EMAIL PROTECTED]




Re: Primary Key question

2005-07-01 Thread Alec . Cawley
"Haisam K. Ido" <[EMAIL PROTECTED]> wrote on 01/07/2005 15:04:01:

> 
> I've created the following table (server 4.1 in win2k)
> 
> CREATE TABLE `os` (
>`id` tinyint(10) NOT NULL auto_increment,
>`name` varchar(255) NOT NULL default '',
>`description` varchar(255) default NULL,
>PRIMARY KEY  (`id`,`name`)
> ) ENGINE=MyISAM DEFAULT CHARSET=utf8;
> 
> and was very surprised that I can do the following twice. Shouldn't 
> this be rejected since name is a primary key and has already been used?
> 
> INSERT INTO os (name,description) VALUES ( 'winxp','winxp');

No. What you have requested is that the combination of id AND name be 
unique. Since id is auto-increment, every record will be unique unless you 
manually force the id to an old value. I guess you want the values to be 
separately unique, in which case you want
PRIMARY KEY (id), UNIQUE (name) 
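
That is, something like this (a sketch only; the other columns stay unchanged):

CREATE TABLE `os` (
  `id` tinyint(10) NOT NULL auto_increment,
  `name` varchar(255) NOT NULL default '',
  `description` varchar(255) default NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY (`name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;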

Alec






Primary Key question

2005-07-01 Thread Haisam K. Ido


I've created the following table (server 4.1 in win2k)

CREATE TABLE `os` (
  `id` tinyint(10) NOT NULL auto_increment,
  `name` varchar(255) NOT NULL default '',
  `description` varchar(255) default NULL,
  PRIMARY KEY  (`id`,`name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

and was very surprised that I can do the following twice. Shouldn't 
this be rejected since name is a primary key and has already been used?


INSERT INTO os (name,description) VALUES ( 'winxp','winxp');





Re: Possible to delay index writes until server is less busy?

2005-07-01 Thread Mathias



Write to a memory table first then do a hotcopy on a scheduled basis.


Do you mean converting the memory table into MyISAM on a scheduled 
basis? (mysqlhotcopy only works on MyISAM and ISAM tables). Or is there 
a faster way of storing the table to disk?


 - Mathias


We've been benchmarking a database that in real-life will have a huge 
write load (max peak load 1 inserts/second) to the same table 
(MyISAM).


We will need about 4 indexes for that table. However, from our 
benchmark tests, it is clear that writing indexes takes too many 
resources and impedes the speed of inserting new records.


To overcome this, we are thinking of:
1 -  using several smaller tables (instead of one big one) by 
creating and writing to a new table every x hours,
2 -  wait with writing the indexes until a new table has been created 
where the next inserts will be (i.e, not write indexes until the 
table has been closed)


The biggest problem now is if the indexes are created when the server 
is very busy. If there was a way of telling MySQL to delay creating 
the indexes when it is busy, then a big obstacle would be out of the 
way.


Is this possible? We could not find anything in the MySQL 
documentation concerning this.


Any suggestions would be greatly appreciated.

Kind regards,

Mathias








RE: load data infile

2005-07-01 Thread Martijn van den Burg
Hi, 


> I have a file where the data looks like this:
> 
> "1","23","345","45";
> "34","4","444","1er";
> 
> I am then trying to load that data using the load data local 
> infile and this statement:
> 
> Load data local infile '/httpd/htdocs/sql/loader.sql' into 
> table vehicles fields terminated by ',' enclosed by "'" lines 
> terminated by ';'

I guess that should be:

Load data local infile '/httpd/htdocs/sql/loader.sql' into table
vehicles fields terminated by ',' enclosed by '"' lines terminated by
'X';

Where 'X' is '\n' when the file was created on a UNIX/Linux system,
'\r\n' when on Windows, or '\r' when on (classic) MacOS.

HTH.

--
Martijn






Re: Possible to delay index writes until server is less busy?

2005-07-01 Thread Mathias

Dan Nelson wrote:


In the last episode (Jun 30), Mathias said:
 


We've been benchmarking a database that in real-life will have a huge
write load (max peak load 1 inserts/second) to the same table
(MyISAM).

We will need about 4 indexes for that table. However, from our
benchmark tests, it is clear that writing indexes takes too many
resources and impedes the speed of inserting new records.

To overcome this, we are thinking of:
1 -  using several smaller tables (instead of one big one) by creating 
and writing to a new table every x hours,
2 -  wait with writing the indexes until a new table has been created 
where the next inserts will be (i.e, not write indexes until the table 
has been closed)
   



You want the delay_key_write flag.  You can set it per-table, or
globally.  You can use the "FLUSH TABLE mytable" command to force mysql
to update the on-disk copy of the indexes.

http://dev.mysql.com/doc/mysql/en/create-table.html
http://dev.mysql.com/doc/mysql/en/myisam-start.html
http://dev.mysql.com/doc/mysql/en/flush.html
 



Yes, that is something we are considering doing. Any suggestions though 
how to best decide when to do this? As far as we know, there is no way 
of determining from within MySQL whether it is very busy or not. I 
guess we need to determine that externally before running the queries.


 - Mathias





Re: Possible to delay index writes until server is less busy?

2005-07-01 Thread Mathias

[EMAIL PROTECTED] wrote:


Write to a memory table first then do a hotcopy on a scheduled basis.


I'll look into that. Thanks for your reply.

 - Mathias







- Original Message - From: "Mathias" 
<[EMAIL PROTECTED]>

To: 
Sent: Thursday, June 30, 2005 9:10 AM
Subject: Possible to delay index writes until server is less busy?


We've been benchmarking a database that in real-life will have a huge 
write load (max peak load 1 inserts/second) to the same table 
(MyISAM).


We will need about 4 indexes for that table. However, from our 
benchmark tests, it is clear that writing indexes takes too many 
resources and impedes the speed of inserting new records.


To overcome this, we are thinking of:
1 -  using several smaller tables (instead of one big one) by 
creating and writing to a new table every x hours,
2 -  wait with writing the indexes until a new table has been created 
where the next inserts will be (i.e, not write indexes until the 
table has been closed)


The biggest problem now is if the indexes are created when the server 
is very busy. If there was a way of telling MySQL to delay creating 
the indexes when it is busy, then a big obstacle would be out of the 
way.


Is this possible? We could not find anything in the MySQL 
documentation concerning this.


Any suggestions would be greatly appreciated.

Kind regards,

Mathias















Re: View onto tables w/ identical key names

2005-07-01 Thread Martijn Tonies
Les,


> i am working on a database system where a number of tables (5-10) each
> with possibly hundreds of columns share an identical primary key name.
> the truth is if it weren't for a limitation in the number of columns in
> M$ Access (long story: we're creating a client server scheme so they
> can get their Access DB for data analysis purposes post data
> collection) these would be all one table.
>
> since i need to use MySQL tables with a similar structure (long story),
> i would like to join these tables at the hip, so to speak, with a VIEW.
> however when i try to create said VIEW from a select/join, MySQL balks
> because the primary key name is the same in each table:
>
>   select * from id left join dia USING (mat) left join msa USING (mat)
> left join sq USING (mat)

When creating the view, use
CREATE VIEW viewname ( column names )
AS
select [fields]

Instead of "*". This way, you can only return a single "mat" column instead
of having that column twice in the view.
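
A rough sketch of that, using the table names from your query (the non-key
column names here are invented):

CREATE VIEW combined (mat, dia_col, msa_col, sq_col)
AS
SELECT id.mat, dia.dia_col, msa.msa_col, sq.sq_col
FROM id
LEFT JOIN dia USING (mat)
LEFT JOIN msa USING (mat)
LEFT JOIN sq USING (mat);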

With regards,

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS SQL
Server
Upscene Productions
http://www.upscene.com
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





Update query to set random data

2005-07-01 Thread zzapper
Hi,
In order to generate some useful test data

e.g. to randomly set the flag select_normal to 0 or 1, use the following:

update tbl_products set select_normal=round(rand());

I've actually answered my own question (as usual, in formulating the question
you can often work out the answer).

But anyone got any other useful queries in this line?
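
A couple of other queries in the same vein (the column names are just examples):

-- random integer between 1 and 5, e.g. for a test rating column
UPDATE tbl_products SET rating = FLOOR(1 + RAND() * 5);
-- random date somewhere in the last year
UPDATE tbl_products SET created = DATE_SUB(CURDATE(), INTERVAL FLOOR(RAND() * 365) DAY);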

-- 
zzapper
vim -c ":%s%s*%Cyrnfr)fcbafbe[Oenz(Zbbyranne%|:%s)[[()])-)Ig|norm Vg?"
http://www.rayninfo.co.uk/tips/ vim, zsh & success tips

