Dual Opteron, linux kernels, 64 bit, mysql 4.1, InnoDB

2005-05-08 Thread Richard Dale
A new server is about to arrive here and will have 8x15K RPM spindles,
dual Opteron processors and 4GB of RAM, and will have around 100GB of
database (primarily stock market prices) - the SCSI controller will also
have battery-backed RAM too.  InnoDB will be used exclusively.

I've searched the list and seen varying reports of which Linux kernels work
best etc.

I'd be interested to know the following:
a) Which 64 bit Linux distributions are good for the task?
b) Which 64 bit Linux distributions are bad for the task?
c) Any comments on kernels, particularly with regard to 64 bit support and
schedulers?  Any problems with the latest kernels (2.6.11 & 2.6.12-rcX)?
d) Any recommendations for RAID volume setup
e) Any MySQL optimisations for multiple spindles, onboard caching, stripe
sizes, RAID5 vs RAID10.
f) Any MySQL reliability settings to take into account the battery-backed
RAM on the RAID controller?

I'm happy to collate the responses into a summary too.

I'm aware of the following discussions, which describe a reasonably grunty
dual AMD system with a similar configuration to mine:
http://meta.wikimedia.org/wiki/Hardware_ordered_April_2005
http://meta.wikimedia.org/wiki/Hardware_order_May_2004

Best regards,
Richard Dale.
Norgate Investor Services
- Premium quality Stock, Futures and Foreign Exchange Data for
  markets in Australia, Asia, Canada, Europe, UK & USA -
www.premiumdata.net



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Migrating Database

2005-05-08 Thread pdutta
MySQL data files are binary compatible from one machine to another, even
InnoDB data files.  As long as you are not trying to "upgrade" from version 3.23 to
version 4.1, you are okay with copying the data files from one machine to
another.

--
Partha Dutta
Sr. Consultant, MySQL Inc.
Quoting Chris Knipe <[EMAIL PROTECTED]>:
Temporary replication comes to mind; filesystem hot copy as well (maybe).
--
Chris.
"I love deadlines. I especially love the whooshing sound they make as 
they fly by." - Douglas Adams, 'Hitchhiker's Guide to the Galaxy'

- Original Message - From: "Brian Erickson" 
<[EMAIL PROTECTED]>
To: 
Sent: Sunday, May 08, 2005 10:47 PM
Subject: Migrating Database

Greetings all,
We are migrating our web site from one server to another and are
having some issues with transferring our MySQL database. Here's the
process we're currently using:
1) run mysqldump for each table in our database (i.e. one file for 
each table)
2) compress each file using gzip
3) transfer the files to our new server
4) decompress
5) import each table using:
mysql [dbname] < [filename]

Here's the problem: several of our tables have over 20 million rows,
and this import process is taking well over 6 hours. Obviously, our
site cannot go live until the database is fully imported, and this is
much too long for us to be down.
The two possible solutions I've researched are:
1) Copy data files directly. This concerns me because of possible
version incompatibilities.
2) Using LOAD DATA commands. I'm not familiar with these at all, and
frankly, not real sure how they work.
Can anyone offer us some advice as to the easiest way we can
accomplish this, whether it's one of the above solutions, or another
one completely?
Thanks in advance!


Re: How to handle date and time conversions

2005-05-08 Thread Rhino
I'm not sure what you mean by date *conversions* but the date *functions*
can be found here:
http://dev.mysql.com/doc/mysql/en/date-and-time-functions.html. You will
probably use these for most conversions you attempt.
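As a quick illustration of the kinds of conversions that manual page covers, here is a sketch using Python's datetime as a stand-in (the rough MySQL equivalents, DATE_FORMAT(), STR_TO_DATE() and FROM_UNIXTIME()/UNIX_TIMESTAMP(), are noted in the comments; the sample date is invented):

```python
from datetime import datetime

# Illustration only: the same conversions the MySQL date/time
# functions perform, sketched with Python's datetime.

ts = datetime(2005, 5, 8, 8, 17, 0)

# DATE_FORMAT(ts, '%Y-%m-%d'): format a datetime as a string
formatted = ts.strftime("%Y-%m-%d")

# STR_TO_DATE('08/05/2005', '%d/%m/%Y'): parse a string into a date
parsed = datetime.strptime("08/05/2005", "%d/%m/%Y")

# UNIX_TIMESTAMP(ts) / FROM_UNIXTIME(n): to and from epoch seconds
epoch = int(ts.timestamp())
roundtrip = datetime.fromtimestamp(epoch)
```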

Rhino

- Original Message - 
From: "Robert A. Rawlinson" <[EMAIL PROTECTED]>
To: 
Sent: Sunday, May 08, 2005 8:17 AM
Subject: How to handle date and time conversions


> I have read somewhere how to do the conversions but for some reason I
> can not find it again. Now I need to do conversions soon. Could someone
> direct me to where the documentation is located?
> Thanks for any help you can offer.
> Bob Rawlinson
>
> -- 
> Robert A. Rawlinson
> Felicity Ohio 45120
>
>
>



Re: Migrating Database

2005-05-08 Thread Ryan J. Cavicchioni
Brian Erickson wrote:

>Greetings all,
>
>We are migrating our web site from one server to another and are
>having some issues with transferring our MySQL database. Here's the
>process we're currently using:
>
>1) run mysqldump for each table in our database (i.e. one file for each table)
>2) compress each file using gzip
>3) transfer the files to our new server
>4) decompress
>5) import each table using:
>mysql [dbname] < [filename]
>
>Here's the problem: several of our tables have over 20 million rows,
>and this import process is taking well over 6 hours. Obviously, our
>site cannot go live until the database is fully imported, and this is
>much too long for us to be down.
>
>The two possible solutions I've researched are:
>
>1) Copy data files directly. This concerns me because of possible
>version incompatibilities.
>2) Using LOAD DATA commands. I'm not familiar with these at all, and
>frankly, not real sure how they work.
>
>Can anyone offer us some advice as to the easiest way we can
>accomplish this, whether it's one of the above solutions, or another
>one completely?
>
>Thanks in advance!
>
>  
>
If you could bring both database servers down, you could tar up the data
directory and untar it on the other server.

-- 
Ryan Cavicchioni

GPG ID: C271BCA8
GPG Public Key: http://confabulator.net/gpg/ryan.asc
GPG Fingerprint: 83E4 2495 6194 0F66 ED85 22B4 4CC0 DA01 C271 BCA8





compiling mysql on macosx 10.4

2005-05-08 Thread Terry Richards
Does anybody know why the following happens?
I am using Mac OS X 10.4
and ./configure --prefix=/usr/local/mysql 
--with-unix-socket-path=/usr/local/mysql/run/mysql_socket 
--with-mysqld-user=mysql --with-comment --with-debug


sql_list.h: At global scope:
sql_list.h:401: warning: inline function `base_ilist::~base_ilist()' 
used but never defined
if g++ -DMYSQL_SERVER -DDEFAULT_MYSQL_HOME="\"/usr/local\"" 
-DDATADIR="\"/usr/local/var\"" -DSHAREDIR="\"/usr/local/share/mysql\"" 
-DHAVE_CONFIG_H -I. -I. -I.. -I../innobase/include -I../include 
-I../regex -I. -O3 -DDBUG_OFF -fno-implicit-templates 
-fno-exceptions -fno-rtti -DHAVE_DARWIN_THREADS -D_P1003_1B_VISIBLE 
-DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ 
-DIGNORE_SIGHUP_SIGQUIT -MT mysqld.o -MD -MP -MF ".deps/mysqld.Tpo" -c 
-o mysqld.o mysqld.cc; \
then mv -f ".deps/mysqld.Tpo" ".deps/mysqld.Po"; else rm -f 
".deps/mysqld.Tpo"; exit 1; fi
opt_range.h: In member function `bool SQL_SELECT::check_quick(THD*, 
bool, ha_rows)':
opt_range.h:146: warning: passing negative value '-0x1' 
for converting 1 of 'Bitmap<64u>::Bitmap(uint)'
mysqld.cc: In function `int bootstrap(FILE*)':
mysqld.cc:3350: warning: converting of negative value 
'-0x1' to 'ulong'
mysqld.cc: In function `void* handle_connections_sockets(void*)':
mysqld.cc:3589: error: invalid conversion from 'size_socket*' to 
'socklen_t*'
mysqld.cc:3589: error:   initializing argument 3 of 'int accept(int, 
sockaddr*, socklen_t*)'
mysqld.cc:3662: error: invalid conversion from 'size_socket*' to 
'socklen_t*'
mysqld.cc:3662: error:   initializing argument 3 of 'int 
getsockname(int, sockaddr*, socklen_t*)'
sql_list.h: At global scope:
sql_list.h:401: warning: inline function `base_ilist::~base_ilist()' 
used but never defined
make[4]: *** [mysqld.o] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [all] Error 2
make[1]: *** [all-recursive] Error 1

:-)2


Re: Migrating Database

2005-05-08 Thread Chris Knipe
Temporary replication comes to mind; filesystem hot copy as well (maybe).
--
Chris.
"I love deadlines. I especially love the whooshing sound they make as they 
fly by." - Douglas Adams, 'Hitchhiker's Guide to the Galaxy'

- Original Message - 
From: "Brian Erickson" <[EMAIL PROTECTED]>
To: 
Sent: Sunday, May 08, 2005 10:47 PM
Subject: Migrating Database

Greetings all,
We are migrating our web site from one server to another and are
having some issues with transferring our MySQL database. Here's the
process we're currently using:
1) run mysqldump for each table in our database (i.e. one file for each 
table)
2) compress each file using gzip
3) transfer the files to our new server
4) decompress
5) import each table using:
mysql [dbname] < [filename]

Here's the problem: several of our tables have over 20 million rows,
and this import process is taking well over 6 hours. Obviously, our
site cannot go live until the database is fully imported, and this is
much too long for us to be down.
The two possible solutions I've researched are:
1) Copy data files directly. This concerns me because of possible
version incompatibilities.
2) Using LOAD DATA commands. I'm not familiar with these at all, and
frankly, not real sure how they work.
Can anyone offer us some advice as to the easiest way we can
accomplish this, whether it's one of the above solutions, or another
one completely?
Thanks in advance!


Migrating Database

2005-05-08 Thread Brian Erickson
Greetings all,

We are migrating our web site from one server to another and are
having some issues with transferring our MySQL database. Here's the
process we're currently using:

1) run mysqldump for each table in our database (i.e. one file for each table)
2) compress each file using gzip
3) transfer the files to our new server
4) decompress
5) import each table using:
mysql [dbname] < [filename]

Here's the problem: several of our tables have over 20 million rows,
and this import process is taking well over 6 hours. Obviously, our
site cannot go live until the database is fully imported, and this is
much too long for us to be down.

The two possible solutions I've researched are:

1) Copy data files directly. This concerns me because of possible
version incompatibilities.
2) Using LOAD DATA commands. I'm not familiar with these at all, and
frankly, not real sure how they work.

Can anyone offer us some advice as to the easiest way we can
accomplish this, whether it's one of the above solutions, or another
one completely?

Thanks in advance!
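One thing worth knowing about step 5: much of the import time is per-statement overhead, which is why mysqldump's --extended-insert output (enabled as part of --opt) replays far faster than one INSERT per row. A rough sketch of the two replay styles, using sqlite3 as a stand-in (the table and rows are invented for illustration):

```python
import sqlite3

# Illustration with sqlite3: row-by-row replay vs. batched replay,
# standing in for a dump without/with --extended-insert.

conn = sqlite3.connect(":memory:")
rows = [(i, "x" * 20) for i in range(1000)]

# Row-by-row, as a dump without --extended-insert would replay:
conn.execute("CREATE TABLE t1 (id INTEGER, payload TEXT)")
for r in rows:
    conn.execute("INSERT INTO t1 VALUES (?, ?)", r)

# Batched, as extended-INSERT output replays (executemany
# approximates one multi-row statement):
conn.execute("CREATE TABLE t2 (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO t2 VALUES (?, ?)", rows)

n1 = conn.execute("SELECT COUNT(*) FROM t1").fetchone()[0]
n2 = conn.execute("SELECT COUNT(*) FROM t2").fetchone()[0]
```

Both paths load the same data; the batched form simply pays the statement overhead far fewer times.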




Re: Error Connecting To Server From Client Machine-Linux

2005-05-08 Thread Andy Pieters
Hi Mark

You should know that users and passwords are specific to the host where they 
are being used.

For instance, if your root password on the server is mypass, and you use it to 
connect to the mysql server from the server itself, then it will get accepted.

Now if you use a mysql client on another machine to connect to the server and 
use the root account with password mypass then it will get rejected.

Check out the mysql documentation on the GRANT command.  You will see that it 
includes a hostname.

Like: GRANT ALL ON database.* TO 'user'@'host' IDENTIFIED BY 'password';

See http://dev.mysql.com/doc/mysql/en/grant.html
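A toy model of the point above, namely that MySQL authenticates the (user, host) pair rather than the user name alone (the grant rows, hosts and passwords below are invented for illustration):

```python
# Toy model: MySQL checks the (user, host) pair against its grant
# table, so root@localhost and root@192.168.1.2 are different accounts.
# All rows below are invented.

grants = {
    ("root", "localhost"): "mypass",
    ("webapp", "192.168.1.%"): "apppass",
}

def host_matches(pattern: str, host: str) -> bool:
    """MySQL-style host pattern: % acts as a wildcard."""
    if "%" in pattern:
        prefix = pattern.split("%", 1)[0]
        return host.startswith(prefix)
    return pattern == host

def can_connect(user: str, host: str, password: str) -> bool:
    return any(
        u == user and host_matches(h, host) and pw == password
        for (u, h), pw in grants.items()
    )

# root connecting from the server itself: accepted
ok_local = can_connect("root", "localhost", "mypass")
# same root password from another machine: rejected, no matching row
ok_remote = can_connect("root", "192.168.1.2", "mypass")
```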

Kind regards


Andy

-- 
Registered Linux User Number 379093
-- --BEGIN GEEK CODE BLOCK-
Version: 3.1
GAT/O/>E$ d-(---)>+ s:(+)>: a--(-)>? C$(+++) UL>$ P-(+)>++
L+++>$ E---(-)@ W+++>+++$ !N@ o? !K? W--(---) !O !M- V-- PS++(+++)
PE--(-) Y+ PGP++(+++) t+(++) 5-- X++ R*(+)@ !tv b-() DI(+) D+(+++) G(+)
e>$@ h++(*) r-->++ y--()>
-- ---END GEEK CODE BLOCK--
--
Check out these few php utilities that I released
 under the GPL2 and that are meant for use with a 
 php cli binary:
 
 http://www.vlaamse-kern.com/sas/
--

--




Re: Will myisam lock if the query can be resolved from disk cache/query cache?

2005-05-08 Thread Kevin Burton
Harrison Fisk wrote:
There isn't really any way to "use" concurrent INSERT.  It happens 
automatically if possible.  However there are a few things you can do 
to help it along, such as OPTIMIZE after you DELETE large portions of 
the table.  Also it does have to be enabled in LOAD DATA INFILE 
manually.  However, regular INSERTs and SELECTs should do this 
automatically.
Well... in order to "use" concurrent insert you need to keep tables 
free of deleted data.  So the developer just needs to make sure they're 
always in this situation.

Why do you think this is your bottleneck?  Have you measured anything 
quantitatively?  What is your table_locks_immediate vs. your 
table_locks_waited?  What is your rate of INSERTs?
This is a good idea actually.  I took a look at these values and only 
3.6% of our queries wait for locks.
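For reference, a percentage like the one quoted above is just table_locks_waited as a share of all lock requests; with invented counter values:

```python
# Invented SHOW STATUS counter values, for illustration only:
table_locks_immediate = 1073000
table_locks_waited = 40000

# Share of table lock requests that had to wait:
waited_pct = 100.0 * table_locks_waited / (
    table_locks_immediate + table_locks_waited
)
```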

Of course the problem might be that these locks take up a lot of time in 
critical places.  It would be nice to see these values broken down into 
seconds waited, but this is probably too much to ask for just yet ;)

Kevin
--
Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
See irc.freenode.net #rojo if you want to chat.

Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
  Kevin A. Burton, Location - San Francisco, CA
 AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 



Re: Will myisam lock if the query can be resolved from disk cache/query cache?

2005-05-08 Thread Harrison Fisk
Hi,
On May 8, 2005, at 1:47 PM, Kevin Burton wrote:
Harrison Fisk wrote:
aren't loaded into the query cache, they are loaded into the key 
cache (key_buffer_size).
Yes... you busted me ! :). I meant to say key cache though.
Alright, I assumed a typo or such.
Not only THAT but it doesn't need to read the disk because the 
filesystem buffer has the blocks in memory.
In this config will the SELECTs block for the INSERTs?  I guess they 
would!

Yes.  If MySQL has to actually read the table or indexes then it will 
set locks to do so, as appropriate to the storage engine in use.  
Keep in mind, if everything is coming from cache (key_buffer + disk 
buffer), the lock will generally be very quick as there isn't any 
disk i/o to block on.

Of course... I realize.  But what if it's blocked by 5-10 INSERTs?  
Then it's going to have to WAIT for these INSERTs to complete even 
though it can resolve the query without waiting for the table :-/
It is waiting for the table, it isn't a matter of waiting for disk i/o, 
but instead making sure it doesn't read anything in an inconsistent 
state.  Every operation has to be atomic and make sure not to read 
half-written information.  Also concurrent insert would prevent this 
from happening as well.

This type of scenario would yield dramatic performance improvements 
by migrating to INNODB... would it not?
Either that or there's the option of using MyISAM with no DELETEd 
rows (since it can then support concurrent insert.)

Concurrent insert sounds like it would work well with your above 
mythical application (since you didn't mention any DELETEs).  The 
only locking conflict you might have would be that your INSERTs 
would lock other INSERTs while they are occurring.
Yes... that's my current thinking: that our INSERTs are blocking 
SELECTs even if they can complete without hitting disk.  Now we need 
to find out if we can use the concurrent insert feature of MyISAM 
without migrating to InnoDB.  It's harder to migrate to InnoDB right 
now.
There isn't really any way to "use" concurrent INSERT.  It happens 
automatically if possible.  However there are a few things you can do 
to help it along, such as OPTIMIZE after you DELETE large portions of 
the table.  Also it does have to be enabled in LOAD DATA INFILE manually.  
However, regular INSERTs and SELECTs should do this automatically.

Why do you think this is your bottleneck?  Have you measured anything 
quantitatively?  What is your table_locks_immediate vs. your 
table_locks_waited?  What is your rate of INSERTs?

Regards,
Harrison
--
Harrison C. Fisk, Trainer and Consultant
MySQL AB, www.mysql.com
Get a jumpstart on MySQL Cluster -- 
http://www.mysql.com/consulting/packaged/cluster.html



Re: Will myisam lock if the query can be resolved from disk cache/query cache?

2005-05-08 Thread Kevin Burton
Harrison Fisk wrote:
aren't loaded into the query cache, they are loaded into the key cache 
(key_buffer_size).
Yes... you busted me ! :). I meant to say key cache though. 

Now assuming that you have the query cache actually being used (the 
cache of the actual statement), then normally the SELECT won't wait 
for any locks.  So someone can have a WRITE lock on the table, and 
your SELECT will still run.  If you have a query cache miss, then it 
will need to acquire the READ lock like a normal SELECT.

Yes... I realize.  The issue is though that only a small percentage of 
our queries are actually using the query cache. 

Not only THAT but it doesn't need to read the disk because the 
filesystem buffer has the blocks in memory.
In this config will the SELECTs block for the INSERTs?  I guess they 
would!

Yes.  If MySQL has to actually read the table or indexes then it will 
set locks to do so, as appropriate to the storage engine in use.  Keep 
in mind, if everything is coming from cache (key_buffer + disk 
buffer), the lock will generally be very quick as there isn't any disk 
i/o to block on.

Of course... I realize.  But what if it's blocked by 5-10 INSERTs?  Then 
it's going to have to WAIT for these INSERTs to complete even though it 
can resolve the query without waiting for the table :-/

This type of scenario would yield dramatic performance improvements 
by migrating to INNODB... would it not?
Either that or there's the option of using MyISAM with no DELETEd 
rows (since it can then support concurrent insert.)

Concurrent insert sounds like it would work well with your above 
mythical application (since you didn't mention any DELETEs).  The only 
locking conflict you might have would be that your INSERTs would lock 
other INSERTs while they are occurring.
Yes... that's my current thinking: that our INSERTs are blocking SELECTs 
even if they can complete without hitting disk.  Now we need to find out 
if we can use the concurrent insert feature of MyISAM without migrating 
to InnoDB.  It's harder to migrate to InnoDB right now.

Kevin
--
Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
See irc.freenode.net #rojo if you want to chat.

Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
  Kevin A. Burton, Location - San Francisco, CA
 AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 



Re: Will myisam lock if the query can be resolved from disk cache/query cache?

2005-05-08 Thread Harrison Fisk
Hi Kevin,
On May 8, 2005, at 1:07 PM, Kevin Burton wrote:
OK.
Lets take a mythical application.  The app is spending about 50% of 
its time inserting into table FOO.  The other 50% of the time its 
spent doing SELECT against the table.

The SELECTs can use an index which is already fully loaded into the 
query cache.
Indexes aren't loaded into the query cache, they are loaded into the 
key cache (key_buffer_size).

Now assuming that you have the query cache actually being used (the 
cache of the actual statement), then normally the SELECT won't wait for 
any locks.  So someone can have a WRITE lock on the table, and your 
SELECT will still run.  If you have a query cache miss, then it will 
need to acquire the READ lock like a normal SELECT.

Not only THAT but it doesn't need to read the disk because the 
filesystem buffer has the blocks in memory.
In this config will the SELECTs block for the INSERTs?  I guess they 
would!
Yes.  If MySQL has to actually read the table or indexes then it will 
set locks to do so, as appropriate to the storage engine in use.  Keep 
in mind, if everything is coming from cache (key_buffer + disk buffer), 
the lock will generally be very quick as there isn't any disk i/o to 
block on.

This type of scenario would yield dramatic performance improvements 
by migrating to INNODB... would it not?
Either that or there's the option of using MyISAM with no DELETEd rows 
(since it can then support concurrent insert.)
Concurrent insert sounds like it would work well with your above 
mythical application (since you didn't mention any DELETEs).  The only 
locking conflict you might have would be that your INSERTs would lock 
other INSERTs while they are occurring.

Regards,
Harrison
--
Harrison C. Fisk, Trainer and Consultant
MySQL AB, www.mysql.com
Get a jumpstart on MySQL Cluster -- 
http://www.mysql.com/consulting/packaged/cluster.html



Will myisam lock if the query can be resolved from disk cache/query cache?

2005-05-08 Thread Kevin Burton
OK.
Lets take a mythical application.  The app is spending about 50% of its 
time inserting into table FOO.  The other 50% of the time its spent 
doing SELECT against the table.

The SELECTs can use an index which is already fully loaded into the query 
cache. Not only THAT but it doesn't need to read the disk because the 
filesystem buffer has the blocks in memory. 

In this config will the SELECTs block for the INSERTs?  I guess they would!
This type of scenario would yield dramatic performance improvements by 
migrating to INNODB... would it not? 

Either that or there's the option of using MyISAM with no DELETEd rows 
(since it can then support concurrent insert.)

Kevin
--
Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
See irc.freenode.net #rojo if you want to chat.

Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html
  Kevin A. Burton, Location - San Francisco, CA
 AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 



Error Connecting To Server From Client Machine-Linux

2005-05-08 Thread Mark Sargent
Hi All,
I can ping the server from the client, and I am seeing the attempted 
phpmyadmin connection from the client via the firewall alert. But I get 
an error saying that the client is not allowed to connect to the mysql 
server. I'm guessing it's a setting in config.inc.php..?

$cfg['Servers'][$i]['host'] = '192.168.1.1';
$cfg['Servers'][$i]['user'] = 'root';
$cfg['Servers'][$i]['password'] = '???';
I know that the password is correct as connection from phpmyadmin on 
localhost is fine on the server.
server(Fedora3)=192.168.1.1
client(Fedora3)=192.168.1.2
Something I've missed/not considered..? Is it mysql related..? Still a 
newb, so I'm sure it's something real easy. Cheers.

Mark Sargent.
phpMyAdmin tried to connect to the MySQL server, and the server rejected 
the connection. You should check the host, username and password in 
config.inc.php and make sure that they correspond to the information 
given by the administrator of the MySQL server.

MySQL said:

#1045 - Access denied for user 'root'@'192.168.1.2' (using password: YES)


How to handle date and time conversions

2005-05-08 Thread Robert A. Rawlinson
I have read somewhere how to do the conversions but for some reason I 
can not find it again. Now I need to do conversions soon. Could someone 
direct me to where the documentation is located?
Thanks for any help you can offer.
Bob Rawlinson

--
Robert A. Rawlinson
Felicity Ohio 45120



Partial help needed

2005-05-08 Thread Dave Shariff Yadallee - System Administrator a.k.a. The Root of the Problem
I was able to set up a table no problem.

On one of the blobs, the data is coming up on the web page as a mess 
and not as an interpreted object, as one would expect.

Using MySQL 3.23.58 with PHP 4.3.9




Re: Count of multiple columns

2005-05-08 Thread Rhino
As you have already seen, it is pretty difficult to count things in
multiple columns. Although it is probably possible to do the counting, it
might require programming logic to count specific values in the
various columns and then store the subtotals for each column so that they
can be added together for a final total.

A better solution is to redesign your table. You clearly already sense that
it is not an ideal design and you are right. Your data is not properly
normalized. It is not even in First Normal Form because it contains a
repeating group.

Most professional table designers use a process called Normalization to make
sure that their designs avoid the most common problems found in bad designs.
If you Google on "normalization" you can probably find many tutorials on
this subject; this is just one that I've seen which is reasonably good:
http://www.informit.com/articles/article.asp?p=30885&rl=1. I'll let you read
this tutorial - or another one that suits you more - on your own.

In the meantime, let me suggest that your table should look like this:

Description - varchar(100)
Category - varchar(30)
primary key(Description, Category)

You really don't need the ID column: it doesn't add anything useful to the
data and the Description is much more useful. The Primary Key clause is very
important. It must contain both the Description and the Category: if it
contained only the Description, then you would be limited to only one
Category for each Description. If you add Category to the Primary Key, you
can now have as many categories as you like for each Description but you can
be sure that any given Description can only have a specific Category once.
In other words, it is not a problem to have *one* row for Description
'aaa' that says the Category is 'Food', but the primary key will ensure that
you can never have MORE than one row with a Category of 'Food' for
description 'aaa'.

Then, the data will look like this:

Description    Category
aaa   Food
aaa   America
aaa   Cheese
bbb   Drink
bbb   America
bbb   Wines
cccWines
cccDrink
ddd   America
ddd   Food

If you want to find the number of rows containing 'America':

select count(*)
from mytable
where Category = 'America'

To get the number of rows containing 'Wines', just replace 'America' with
'Wines' in the query.
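A sketch of this design, using sqlite3 as a stand-in (the MySQL DDL is essentially the same): it shows the composite primary key rejecting a duplicate (Description, Category) pair, and the count query from above:

```python
import sqlite3

# Sketch of the normalized design with a composite primary key,
# using sqlite3 as a stand-in for MySQL.

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE mytable (
           Description VARCHAR(100),
           Category    VARCHAR(30),
           PRIMARY KEY (Description, Category)
       )"""
)
rows = [
    ("aaa", "Food"), ("aaa", "America"), ("aaa", "Cheese"),
    ("bbb", "Drink"), ("bbb", "America"), ("bbb", "Wines"),
    ("ccc", "Wines"), ("ccc", "Drink"),
    ("ddd", "America"), ("ddd", "Food"),
]
conn.executemany("INSERT INTO mytable VALUES (?, ?)", rows)

# The composite key rejects a second ('aaa', 'Food') row:
try:
    conn.execute("INSERT INTO mytable VALUES ('aaa', 'Food')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

# The count query from the message above:
america = conn.execute(
    "SELECT COUNT(*) FROM mytable WHERE Category = 'America'"
).fetchone()[0]
```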


Rhino

- Original Message - 
From: "Micha Berdichevsky" <[EMAIL PROTECTED]>
To: 
Sent: Sunday, May 08, 2005 2:07 AM
Subject: Count of multiple columns


> Hi.
> I have a table that stores different items. For each item, I can attach
> up to 5 different textual categories.
> Those categories are free-text, and different columns can have the same
> values (example below).
> I am trying to select the count of each of those categories, regardless
> of its position.
> The table looks like:
> ID - int(11) auto_increment,
> Description - varchar(100),
> Cat1 - varchar(30),
> Cat2 - varchar(30),
> Cat3 - varchar(30),
> Cat4 - varchar(30),
> Cat5 - varchar(30) ... (etc).
>
> Sample data may be:
> 1, "aaa", "Food", "America", "Cheese", NULL, NULL
> 2, "bbb", "Drink", "America", "Wines", NULL, NULL
> 3, "ccc", "Wines", "Drink", NULL, NULL, NULL
> 4, "ddd", "America", "Food", NULL, NULL
>
> The result I want is
> Food - 2
> America - 3
> Drink - 2
> Wines - 2
> Cheese - 1
>
> Hope you guys can help (BTW, I'm not too happy with the way the table is
> designed, but couldn't think of a better way).
> Thanks, Micha.
>
>
>





Re: Count of multiple columns

2005-05-08 Thread mfatene
Selon Micha Berdichevsky <[EMAIL PROTECTED]>:

> Hi.
> I have a table that stores different items. For each item, I can attach
> up to 5 different textual categories.
> Those categories are free-text, and different columns can have the same
> values (example below).
> I am trying to select the count of each of those categories, regardless
> of its position.
> The table looks like:
> ID - int(11) auto_increment,
> Description - varchar(100),
> Cat1 - varchar(30),
> Cat2 - varchar(30),
> Cat3 - varchar(30),
> Cat4 - varchar(30),
> Cat5 - varchar(30) ... (etc).
>
> Sample data may be:
> 1, "aaa", "Food", "America", "Cheese", NULL, NULL
> 2, "bbb", "Drink", "America", "Wines", NULL, NULL
> 3, "ccc", "Wines", "Drink", NULL, NULL, NULL
> 4, "ddd", "America", "Food", NULL, NULL
>
> The result I want is
> Food - 2
> America - 3
> Drink - 2
> Wines - 2
> Cheese - 1
>
> Hope you guys can help (BTW, I'm not too happy with the way the table is
> designed, but couldn't think of a better way).
> Thanks, Micha.
>
>
>

Hi,
You can try something not optimized (given the table structure :o)) like this:

create table if not exists categories(ID int auto_increment primary key,
Description varchar(100),
Cat1 varchar(30),
Cat2 varchar(30),
Cat3 varchar(30),
Cat4 varchar(30),
Cat5 varchar(30));


insert into categories (Description,cat1,cat2,cat3,cat4,cat5) values(
"aaa", "Food", "America", "Cheese", NULL, NULL),("bbb", "Drink", "America",
"Wines", NULL, NULL),
("ccc", "Wines", "Drink", NULL, NULL, NULL),
("ddd", "America", "Food", NULL, NULL,NULL);

select * from categories;

mysql> select res.val,count(res.val) from (select cat1 as val from categories
union all select cat2 as val from categories) res
group by val;
+---------+----------------+
| val     | count(res.val) |
+---------+----------------+
| America |              3 |
| Drink   |              2 |
| Food    |              2 |
| Wines   |              1 |
+---------+----------------+
4 rows in set (0.00 sec)

Now just add the "union all" you need.
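For completeness, here is the query with all five "union all" branches, run against sqlite3 as a stand-in, with NULLs filtered out so empty category slots are not counted:

```python
import sqlite3

# The completed UNION ALL query across all five category columns,
# demonstrated with sqlite3 standing in for MySQL.

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE categories (
           ID INTEGER PRIMARY KEY AUTOINCREMENT,
           Description VARCHAR(100),
           Cat1 VARCHAR(30), Cat2 VARCHAR(30), Cat3 VARCHAR(30),
           Cat4 VARCHAR(30), Cat5 VARCHAR(30)
       )"""
)
conn.executemany(
    "INSERT INTO categories (Description, Cat1, Cat2, Cat3, Cat4, Cat5)"
    " VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("aaa", "Food", "America", "Cheese", None, None),
        ("bbb", "Drink", "America", "Wines", None, None),
        ("ccc", "Wines", "Drink", None, None, None),
        ("ddd", "America", "Food", None, None, None),
    ],
)

query = """
    SELECT res.val, COUNT(res.val)
    FROM (SELECT Cat1 AS val FROM categories
          UNION ALL SELECT Cat2 FROM categories
          UNION ALL SELECT Cat3 FROM categories
          UNION ALL SELECT Cat4 FROM categories
          UNION ALL SELECT Cat5 FROM categories) res
    WHERE res.val IS NOT NULL
    GROUP BY res.val
"""
counts = dict(conn.execute(query).fetchall())
```

With all five branches included, the counts match the result Micha asked for (America 3, Drink 2, Food 2, Wines 2, Cheese 1).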


Mathias


