RE: Query problem: UNION in subquery

2006-05-24 Thread Neeraj

Hi Luke..


Try this - the whole UNION has to live inside a single derived table, and that derived table needs an alias:

SELECT ObjectId FROM 


(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId, 
f15.Form15PatientID AS PtId FROM form15 f15 
WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6') 


UNION 


SELECT f15.Form15SampleTube6RnaBarcode AS ObjectId, 
f15.Form15PatientID  AS PtId FROM form15 f15 
WHERE f15.Form15SampleTube6RnaBarcode IN ('01D2V','01DH6')) AS abc


Cheers :)


Neeraj Black Bits

-Original Message-
From: Luke [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 24, 2006 9:36 AM
To: mysql@lists.mysql.com
Subject: Query problem: UNION in subquery

Hello! 

I have a problem using UNIONs inside subqueries. I have simplified my 
query to make it more readable/understandable. 

The question is about the right syntax. 

1. 
This works fine /UNION/ 


(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId, 
f15.Form15PatientID AS PtId FROM form15 f15 
WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6')) 


UNION 


(SELECT f15.Form15SampleTube6RnaBarcode AS ObjectId, 
f15.Form15PatientID  AS PtId FROM form15 f15 
WHERE f15.Form15SampleTube6RnaBarcode IN ('01D2V','01DH6')) 


2. 
This works fine too /subquery/: 


SELECT ObjectId FROM 


(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId, 
f15.Form15PatientID AS PtId FROM form15 f15 
WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6')) AS 
SubTable1; 


3. 
But when I combine 1 and 2, I get into trouble. This is a query draft; I 
can't come up with the right syntax: 


SELECT ObjectId FROM 


(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId, 
f15.Form15PatientID AS PtId FROM form15 f15 
WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6')) 


UNION 


(SELECT f15.Form15SampleTube6RnaBarcode AS ObjectId, 
f15.Form15PatientID  AS PtId FROM form15 f15 
WHERE f15.Form15SampleTube6RnaBarcode IN ('01D2V','01DH6')) 


I tried many combinations and got various syntax errors. Any ideas? 


Thanks, 
Luke 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Noob: Converting to Inner Join

2006-05-24 Thread Graham Anderson
Are there any advantages to converting this 'working' query below to  
use INNER JOIN ?

If so, what would the correct syntax be ?

Many thanks


SELECT category.name, page.name, content.title, content.body
FROM category, page, content
WHERE content.page_id = page.id
AND page.category_id = category.id
AND category.id =1
ORDER BY content.order_id ASC
LIMIT 0 , 30





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Need guidance with perfomance improvement...

2006-05-24 Thread Eugene Kosov
1 - while your current select query is reasonably fast, it likely could 
be faster.  It may be that it's taking 0.9 seconds and feels fast, but 
perhaps it could be 0.09 seconds instead - and that's not something a 
human can really tell the difference on. It's using an indexed column 
(status), but it's not a very good index (no offense).  There are only a 
few possible values in 500,000 records, so the index isn't terribly 
helpful in locating matching records.  If possible, you could re-write 
your perl scripts to do selects and updates using the ID column (primary 
key), which should be faster than the status column alone.  This may be 
enough to alleviate your locking problem; not sure.
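
A minimal sketch of that two-step pattern (XXX stands for the worker's id, 
exactly as in the original query; the extra status=0 check in the UPDATE 
guards against another worker grabbing the same row):

SELECT id FROM queue WHERE status = 0 LIMIT 1;
-- then, with the id that came back (shown here as 12345):
UPDATE queue SET status = 1, processor_id = XXX WHERE id = 12345 AND status = 0;
-- if the affected-row count is 0, another worker won the race; pick another id.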


My select query works really fast. It's not even 0.09 sec; it takes less 
than 0.01 sec (the client shows 0.00), while the UPDATE takes 0.33 sec (if I 
stop all other processes). I know the `status` index here isn't very good, 
but I think 'LIMIT 1' should help it work faster. I can't use the 
primary key here because of the script architecture. Or is there a way of 
concurrent queue processing without some kind of status field?



2 - Switch to an InnoDB table, which should allow concurrent SELECT and 
UPDATE commands.  This is a fairly big change if you're not used to 
InnoDB tables, so read up on this first.


I thought it would be notably slower with InnoDB.


3 - Depending on how you use this table, running OPTIMIZE TABLE 
periodically may help performance.  Updates and deletes on a MyISAM 
table can cause a fragmented table and poor performance.  In my last job 
I had a table with many thousands of INSERT and subsequent UPDATE 
operations every day, and it would regularly become fragmented to the 
point where performance was 1/4 of what it should have been.  I started 
running an OPTIMIZE TABLE command every few hours, took a couple of 
seconds each time, and performance stayed good.
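
For reference, such a periodic run could be scheduled along these lines (the 
database name and the every-4-hours interval are placeholders, and the 
credentials are assumed to come from the cron user's ~/.my.cnf):

0 */4 * * * /usr/bin/mysql -e "OPTIMIZE TABLE queue" your_db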


Thanks for the advice. I think I should give this a try.. :)



Eugene Kosov wrote:

Hi, list!

I have a little table (about 0.5 million records) which is a kind of 
queue. My perl script takes records from it one by one and does some 
actions.


Here's my table structure:

mysql> desc queue;
+--------------+------------------+------+-----+---------+----------------+
| Field        | Type             | Null | Key | Default | Extra          |
+--------------+------------------+------+-----+---------+----------------+
| id           | int(10) unsigned |      | PRI | NULL    | auto_increment |
| status       | tinyint(4)       |      | MUL | 0       |                |
| processor_id | int(10) unsigned | YES  | MUL | NULL    |                |
... some other data ...
+--------------+------------------+------+-----+---------+----------------+





This table, of course, has some indexes:

mysql> show indexes from queue;
+-------+------------+--------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name     | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+--------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+
| queue |          0 | PRIMARY      |            1 | id           | A         |      522756 |     NULL | NULL   |      | BTREE      |         |
| queue |          1 | status       |            1 | status       | A         |           3 |     NULL | NULL   |      | BTREE      |         |
| queue |          1 | processor_id |            1 | processor_id | A         |      522756 |     NULL | NULL   | YES  | BTREE      |         |
+-------+------------+--------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+
3 rows in set (0.00 sec)



The Perl script runs a query like 'UPDATE queue SET status=1, 
processor_id=XXX WHERE status=0 LIMIT 1'. This query should use the `status` 
index (and does, as we can see in the following EXPLAIN result) and work 
rather fast. But it becomes slower and slower, and I don't understand why.



mysql> explain select * from certificates where status=0;
+----+-------------+-------+------+---------------+--------+---------+-------+-------+-------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows  | Extra       |
+----+-------------+-------+------+---------------+--------+---------+-------+-------+-------------+
|  1 | SIMPLE      | queue | ref  | status        | status |       1 | const | 52740 | Using where |
+----+-------------+-------+------+---------------+--------+---------+-------+-------+-------------+
1 row in set (0.00 sec)


Despite the number of rows to scan (in the EXPLAIN result), I think the query 

Re: Noob: Converting to Inner Join

2006-05-24 Thread Chris Sansom

At 23:17 -0700 23/5/06, Graham Anderson wrote:
Are there any advantages to converting this 'working' query below to 
use INNER JOIN ?

If so, what would the correct syntax be ?

Many thanks


SELECT category.name, page.name, content.title, content.body
FROM category, page, content
WHERE content.page_id = page.id
AND page.category_id = category.id
AND category.id =1
ORDER BY content.order_id ASC
LIMIT 0 , 30


From my limited knowledge (I'm a relative newbie and open to 
correction!) one syntax would be:


SELECT category.name, page.name, content.title, content.body
FROM category INNER JOIN page INNER JOIN content
WHERE content.page_id = page.id
AND page.category_id = category.id
AND category.id = 1
ORDER BY content.order_id ASC
LIMIT 0 , 30

...i.e., simply replace your commas with 'INNER JOIN'. Or you could do this:

SELECT category.name, page.name, content.title, content.body
FROM category
INNER JOIN page ON page.category_id = category.id
INNER JOIN content ON content.page_id = page.id
WHERE category.id = 1
ORDER BY content.order_id ASC
LIMIT 0 , 30

As for advantages, I'm not sure there are any for this particular 
query. The advantages would arise if you were to combine it with 
different JOINs, eg LEFT JOIN, because (in MySQL 5.x anyway) 'INNER 
JOIN' has a higher syntactical priority than the comma, which is the 
lowest priority of all. In other words, if you were to put a LEFT 
JOIN after your comma joins, MySQL would try to execute the LEFT JOIN 
first, but if you used INNER JOIN, that would be done first.


I think that's about right. :-)

--
Cheers... Chris
Highway 57 Web Development -- http://highway57.co.uk/

I wonder who discovered you could get milk from a cow...
and what on _earth_ did he think he was doing?
   -- Billy Connolly

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



[Solved] Urgent problem

2006-05-24 Thread Peter Lauri
It was just a matter of copying the files from the DATA folder in the installation
directory. That was easier than I thought.

-Original Message-
From: Peter Lauri [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 24, 2006 6:52 AM
To: mysql@lists.mysql.com
Subject: Urgent problem

Best group member,

My computer stopped working, but I was able to install Windows on another
partition, and now I face one problem. I was running Windows with MySQL 4.1.

I have all C: working, and can access all files.

The first question: Can I recover that data from MySQL?

The second question: How can I do that? (All attempts to repair Windows have
failed, so that option is gone.)

I have gone through the manual, but all the information there seems to relate to
the case where the database crashes. And some options are there when you do make
backups, but I do not have the backups.

A desperate soul...

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




errno24 - open_files_limit

2006-05-24 Thread mel list_php

Hi!

We have a huge query against a MySQL 4.1 (MyISAM) server to retrieve rows.
The query is built dynamically in a loop, as we are searching an 
external database built to store a tree with the nested tree model.


When we do this search with a normal word, it's fine: the query is around 
4 lines.
If we search with a very frequent word, we get a lot of hits, the 
query becomes very long (probably around 300 lines), and we get errno 24.


Looking at the doc:
http://dev.mysql.com/doc/refman/4.1/en/not-enough-file-handles.html

so apparently I don't have enough file handles. OK, I don't mind increasing 
that. They say to modify the open_files_limit variable.


#open_files_limit

The number of files that the operating system allows mysqld to open. This is 
the real value allowed by the system and might be different from the value 
you gave using the --open-files-limit option to mysqld or mysqld_safe. The 
value is 0 on systems where MySQL can't change the number of open files. 
This variable was added in MySQL 3.23.20.
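
For reference, the option normally goes in the [mysqld] section of the server's 
my.cnf; the number below is only a placeholder, not a recommendation:

[mysqld]
open_files_limit = 4096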


2 questions:
- any idea of the value I should put there? mysqld is running on a machine I 
don't manage; what should I ask for?
- is it possible to modify this value in the conf file and to reload that 
file without stopping/restarting the server?



Additionally, any pointer to a doc explaining why and when MySQL opens 
a new file?


Thanks,
melanie

_
Are you using the latest version of MSN Messenger? Download MSN Messenger 
7.5 today! http://join.msn.com/messenger/overview



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Urgent problem

2006-05-24 Thread Michael Widenius

Hi!

 Peter == Peter Lauri [EMAIL PROTECTED] writes:

Peter Best group member,
Peter My computer stopped working. And I was able to install Windows on another
Peter partition and now I face one problem. I was running Windows with MySQL 
4.1.

Peter I have all C: working, and can access all files.

Peter The first question: Can I recover that data from MySQL?

Peter The second question: How can I do that? (All attempts to repair Windows 
has
Peter failed, so that option is gone)

Peter I have gone thru the manual but all information there seems to be 
related to
Peter if the database crashes. And some options are there when you do make
Peter backups, but I do not have the backups.

Peter A desperate soul...

In principle you can just copy the tables from one partition/computer to
another and start using them. (The MySQL version doesn't have to be
exactly the same, as long as the new one is newer than the old one.)

To fix this:

- Install MySQL on your new partition (computer)
- Copy all files in the MySQL data directory (normally C:\mysql\data)
  to your new partition. If the path is different, you may have to
  create a my.cnf file that includes the --basedir and --datadir
  options pointing at your new paths (a sketch follows below).
- Restart MySQL.
- Check/repair your tables by using from the command line:
  C:\mysql\bin\mysqlcheck --all-databases --auto-repair 
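
For illustration, such a my.cnf could look like this (the paths are placeholders;
point them at wherever the old data directory was copied):

[mysqld]
basedir=C:/mysql
datadir=D:/mysql/data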

Regards,
Monty
CTO of MySQL AB

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: [suse-security] fou4s for 10.1?

2006-05-24 Thread Keith Roberts
There is no reason why you cannot use different package 
managers for your system. The only thing is: do not run two 
managers at the same time, as they require exclusive access 
to the rpm database.

Trying to use more than one package manager, or multiple 
instances of the same one, at the same time is not a good idea; it 
may not be possible due to locks on the rpm database, or it could 
end up corrupting the rpm database.

Smart is still under development, and hopefully the CLI 
version will be fully functional ASAP.

Keith

In theory, theory and practice are the same;
in practice they are not.

On Wed, 24 May 2006, Ingvar Berg wrote:

 To: [EMAIL PROTECTED]
 From: Ingvar Berg [EMAIL PROTECTED]
 Subject: Re: [suse-security] fou4s for 10.1?
 
 Hi,
 
 From my crontab:
 23 4 * * * /usr/bin/smart upgrade -y --update
 
 /Ingvar
 
 [EMAIL PROTECTED] wrote:
  The smart gui seems to be decent as a package manager replacement,
  and I'm definitely going to be using it on my workstation, but the command
  line version doesn't seem to be geared toward running it in a cron job at
  night.  Apt pretty easily performs the main function of fou4s, though
  fou4s presents the information and manages it much better.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



SPAMs

2006-05-24 Thread Jørn Dahl-Stamnes
Seems like someone has got their hands on my e-mail alias which I use for this 
list only. It started last Friday, and I get about 10-15 messages every day.

Does someone else have this problem?

What's the procedure for changing my subscription e-mail address?

-- 
Jørn Dahl-Stamnes
homepage: http://www.dahl-stamnes.net/dahls/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: SPAMs

2006-05-24 Thread Logan, David (SST - Adelaide)
Hi Jørn,

I myself haven't had too many spams; I get a few anyway from the various places my 
email address appears. I would suggest unsubscribing this alias and 
re-subscribing with a new (known only to you) alias.

Regards

---
** _/ **  David Logan 
***   _/ ***  ITO Delivery Specialist - Database
*_/*  Hewlett-Packard Australia Ltd
_/_/_/  _/_/_/    E-Mail: [EMAIL PROTECTED]
   _/  _/  _/  _/     Desk:   +618 8408 4273
  _/  _/  _/_/_/  Mobile: 0417 268 665
*_/   **
**  _/    Postal: 148 Frome Street,
   _/ **  Adelaide SA 5001
  Australia 
invent   
---

-Original Message-
From: Jørn Dahl-Stamnes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 24 May 2006 9:01 PM
To: mysql@lists.mysql.com
Subject: SPAMs

Seems like someone has got their hand of my e-mail alias which I use for this 
list only. It started last friday and I get about 10-15 messages every day.

Does someone else has this problem?

What's the procedure to change my subscribtion e-mail addresse?

-- 
Jørn Dahl-Stamnes
homepage: http://www.dahl-stamnes.net/dahls/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SPAMs

2006-05-24 Thread Jørn Dahl-Stamnes
On Wednesday 24 May 2006 13:36, Logan, David (SST - Adelaide) wrote:
 Hi Jørn,

 I myself haven't had too many spams, I get a few anyway from various places
 my email address is placed. I would suggest try unsubscribing using this
 alias and just re-subscribe with your new (known only to you) alias.

They must have found my e-mail address from this list or from some sort of 
archive.

I don't know which e-mail system (Majordomo, another list server, or something else) this list uses. 
But some of those systems do have commands to change the e-mail address without 
doing an unsubscribe and a new subscribe.

-- 
Jørn Dahl-Stamnes
homepage: http://www.dahl-stamnes.net/dahls/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ib_logfile based recovery

2006-05-24 Thread Christopher Korn
Hello everyone, 

Today a script killed a few InnoDB tables in my database. Although I have 
backups (2 days old), I want to recover - if possible - a more current state of 
my database. Is it possible to recover tables via the ib_logfileX files?
If I simply open the files with Vim, I can see that much of the data I need is 
in them. 


Greetings, 
Chris
-- 
--chris [EMAIL PROTECTED]

Save up to 70% of your online costs: GMX SmartSurfer!
  Download for free: http://www.gmx.net/de/go/smartsurfer


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Available

2006-05-24 Thread Jan Gomes
Thanks, everybody, for your attention!

I made some changes to the hardware and to my table, and obtained an average 
response time of 0.09 seconds.

Now I have a good response time! :-)


Here are my changes:

1 - Updated MySQL 4.1.18 -> 5.0.21.
2 - Upgraded my memory to 3 GB.
3 - Separated temporary/log files and data files onto distinct HDs.
4 - Upgraded key_buffer 384M -> 512M.
5 - Ran ALTER TABLE ... ORDER BY on the column. (Dan's suggestion. Thanks!!)

I will reproduce these changes on another machine to detect the relevant change!

Here are the relevant variables in my config file:

datadir = /data/myisam
key_buffer = 512M
max_allowed_packet = 1M
table_cache = 200
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
thread_concurrency = 8

innodb_data_home_dir = /data/ibdata
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data/iblog
innodb_log_arch_dir = /usr/local/mysql/data/iblog
innodb_buffer_pool_size = 384M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 100M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

Here are my compile options (OS: Debian Linux 3.1; hardware: Intel Xeon 2.8):

CFLAGS=-O3 CXX=gcc CXXFLAGS=-O3 -felide-constructors -fno-exceptions 
-fno-rtti -fomit-frame-pointer -ffixed-ebp

./configure --prefix=/usr/local/mysql --with-mysqld-user=mysql 
--with-unix-socket-path=/tmp/mysql.sock
--with-mysqld-ldflags=-all-static --enable-assembler


Thanks everybody!!

==
Atenciosamente,
Jan Gomes - [EMAIL PROTECTED]



  The query is:
  SELECT * FROM table
  where id_table2 in (1652272,1652293,1652316,1652362);
 
  +-------------+-------+-------+--------------------+
  | select_type | table | type  | possible_keys      |
  +-------------+-------+-------+--------------------+
  | SIMPLE      | table | range | PRIMARY,table_idx1 |
  +-------------+-------+-------+--------------------+

  +------------+---------+------+------+-------------+
  | key        | key_len | ref  | rows | Extra       |
  +------------+---------+------+------+-------------+
  | table_idx1 |       4 | NULL |   72 | Using where |
  +------------+---------+------+------+-------------+
 

 If this is what the EXPLAIN is, and you are getting 4 second response
 times, I would guess that something is amiss.  Have you tried running an
 ANALYZE TABLE on your table to rebuild the index?  Be aware that this
 will issue a table lock on the table, so do it off hours.

 Then, issue the query again and post back time results.

 Also, post the output of SELECT @@version;

 Cheers

 Jay

  PS.: Pardon me for some thing, my english is not very good

 It's just fine!

  Hy Jay,
 
  Thanks for you attention.
 
  Show you my structure and EXPLAIN:
 
  CREATE TABLE `table` (
   `id_table1` int(10) unsigned NOT NULL default '0',
   `id_table2` int(10) unsigned NOT NULL default '0',
   `field1`smallint(5) unsigned NOT NULL default '0',
   `field2`mediumint(8) unsigned NOT NULL default '0',
   `textField` text NOT NULL,
 
   PRIMARY KEY  (`id_table1`,`id_table2`),
   KEY `table_idx1` (`id_table2`)
  ) ENGINE=MyISAM DEFAULT CHARSET=utf8 PACK_KEYS=1
 
 
 
 
  You don't show the query you are explaining.
  It certainly can't be the one in your previous post below, because column
  names don't match.
 
   +-------------+-------+-------+--------------------+
   | select_type | table | type  | possible_keys      |
   +-------------+-------+-------+--------------------+
   | SIMPLE      | table | range | PRIMARY,table_idx1 |
   +-------------+-------+-------+--------------------+

   +------------+---------+------+------+-------------+
   | key        | key_len | ref  | rows | Extra       |
   +------------+---------+------+------+-------------+
   | table_idx1 |       4 | NULL |   72 | Using where |
   +------------+---------+------+------+-------------+
 
 
 
  Please post your exact table schema using SHOW CREATE TABLE, and your
  exact query, along with an EXPLAIN SELECT for the query.
 
  Thanks!
 
  -jay
 
  Jan Gomes wrote:
 
 
  Hy Guys,
 
  I have a simple structure of tables, howewer has 50 million of 
  registers and 2,5 GB of data.
  The table is MyIsam and has 4 integer fields and 1 Text field, one 
  primary key with two integer fields
  and one btree index with one integer field.
 
  There is a select in this table using an index(with one integer field), 
  whith a set value
  for this field ( select * from table where field in 
  (value1,value2,value3,value4,etc) ).
 
  This select has delay 4s average.
 
  Is this a good time for the select ? How can I run this select in less 
  time?
 
  I had make this optimization:
 
  1-Compress the index
  2-sort the index with myisamchk
 
 
  PS.: This table is read-only, hasn't an insert, update or delete.
 
 
 
  ==
  Atenciosamente,
  Jan Gomes - [EMAIL PROTECTED]
 
 

 --
 Jay Pipes
 

RE: SPAMs

2006-05-24 Thread George Law
more than likely someone is harvesting emails off groups.google.com 

http://groups.google.com/group/mailing.database.myodbc

Your posting shows at the top of the list :)



George Law
 -Original Message-
 From: Jørn Dahl-Stamnes [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, May 24, 2006 7:38 AM
 To: mysql@lists.mysql.com
 Subject: Re: SPAMs
 
 On Wednesday 24 May 2006 13:36, Logan, David (SST - Adelaide) wrote:
  Hi Jørn,
 
  I myself haven't had too many spams, I get a few anyway from various
 places
  my email address is placed. I would suggest try unsubscribing using this
  alias and just re-subscribe with your new (known only to you) alias.
 
 They must have found my e-mail addresse from this list or from some sort
 of
 archive.
 
 I don't know which e-mail system (majordomo, listserver, or) this list
 use.
 But some of the systems do have commands to change the e-mail address
 without
 doing a unsubscribe and a new subscribe.
 
 --
 Jørn Dahl-Stamnes
 homepage: http://www.dahl-stamnes.net/dahls/
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: I can't connect to mysql server with PHP

2006-05-24 Thread gerald_clark
战芳 wrote:

Hi gerald_clark,
But when I call mysql_pconnect("localhost:3306", "root", "root_password"), it 
returns the same error. How can I get permission to open 
/var/mysql/lib/mysql.sock?
  Fang

  

what do you get when you do
ls -l /var/mysql/lib/mysql.sock?
Permissions should be srwxrwxrwx


  

fool.ben wrote:



Hi everybody!
I've installed a MySQL server on my computer. The operating system is Red Hat 
Fedora Core 4. The version of the MySQL server is 4.1.3 Beta. I want to 
connect to the server using the following statement:
$db_connection = mysql_pconnect("localhost", "root", "");
The server returned error 2002:
2002: Can't connect to local MySQL server through socket 
'/var/mysql/lib/mysql.sock' (13) 
 

  

The user running the php connection does not have permissions to open
/var/mysql/lib/mysql.sock.



It was suggested that the server may not be running, or that the socket is wrong. 
But when I ran the following statement, no error occurred:
localhost# /usr/local/mysql/bin/mysql -uroot -S/var/mysql/lib/mysql.sock
 

  

Here you are root, so you have permissions.



Can anyone help me?
 

  

Fix your permissions or move your socket to a directory that is world
searchable.
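
For example, something along these lines (the socket path is the one from this 
thread; every directory on the path needs the search/execute bit for the user 
PHP runs as):

ls -ld /var /var/mysql /var/mysql/lib
chmod o+x /var/mysql /var/mysql/lib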



Fang
 

  

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







  



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SPAMs

2006-05-24 Thread Jørn Dahl-Stamnes
On Wednesday 24 May 2006 14:48, George Law wrote:
 more than likely someone is harvesting emails off groups.google.com

 http://groups.google.com/group/mailing.database.myodbc

 Your posting shows top of the list :)

Yet another reason for not using Google. It seems like they help the spammers 
by doing this. It's OK that they show the item in the list, but the e-mail 
address should be masked...

-- 
Jørn Dahl-Stamnes
homepage: http://www.dahl-stamnes.net/dahls/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SPAMs

2006-05-24 Thread Keith Roberts
Check this out, it works a treat for me:

http://www.tmda.net

Regards

Keith

In theory, theory and practice are the same;
in practice they are not.

On Wed, 24 May 2006, Jørn Dahl-Stamnes wrote:

 To: mysql@lists.mysql.com
 From: Jørn Dahl-Stamnes [EMAIL PROTECTED]
 Subject: Re: SPAMs
 
 On Wednesday 24 May 2006 14:48, George Law wrote:
  more than likely someone is harvesting emails off groups.google.com
 
  http://groups.google.com/group/mailing.database.myodbc
 
  Your posting shows top of the list :)
 
 Yet another reason for not using google. It seems like they help the spammers 
 by doing this. It's OK that they show the item in the list, but the e-mail 
 addresse should be masked...
 
 -- 
 Jørn Dahl-Stamnes
 homepage: http://www.dahl-stamnes.net/dahls/
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: LOAD DATA FROM MASTER stops unfinished with Query OK

2006-05-24 Thread Bgs


No ideas?

I tried playing around with read/write timeouts (even though the 
replication is fast); all size limits are greater than the whole 
replicated db. The last table with an accessed MYD file and zero size is a small 
one (a couple of dozen kB).


Bgs wrote:


 Greetings,

I played around with LOAD DATA FROM MASTER (ldfm) and it worked fine in 
a test environment. Now I want to replicate our actual db to a slave. When 
I issue the ldfm command, it starts the replication. I get Query OK, but 
only about 5% of the db is replicated. Apparently all tables that are on 
the slave in the end are exact copies of the master tables, but most MYD 
files are zero sized.



Any ideas?

Thanks in advance
Bgs




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Noob: Converting to Inner Join

2006-05-24 Thread Graham Anderson

many thanks Chris :)
g


On May 24, 2006, at 1:19 AM, Chris Sansom wrote:


At 23:17 -0700 23/5/06, Graham Anderson wrote:
Are there any advantages to converting this 'working' query below  
to use INNER JOIN ?

If so, what would the correct syntax be ?

Many thanks


SELECT category.name, page.name, content.title, content.body
FROM category, page, content
WHERE content.page_id = page.id
AND page.category_id = category.id
AND category.id =1
ORDER BY content.order_id ASC
LIMIT 0 , 30


From my limited knowledge (I'm a relative newbie and open to  
correction!) one syntax would be:


SELECT category.name, page.name, content.title, content.body
FROM category INNER JOIN page INNER JOIN content
WHERE content.page_id = page.id
AND page.category_id = category.id
AND category.id = 1
ORDER BY content.order_id ASC
LIMIT 0 , 30

...ie., simply replace your commas with 'INNER JOIN'. Or you could  
do this:


SELECT category.name, page.name, content.title, content.body
FROM category
INNER JOIN page ON page.category_id = category.id
INNER JOIN content ON content.page_id = page.id
WHERE category.id = 1
ORDER BY content.order_id ASC
LIMIT 0 , 30

As for advantages, I'm not sure there are any for this particular  
query. The advantages would arise if you were to combine it with  
different JOINs, eg LEFT JOIN, because (in MySQL 5.x anyway) 'INNER  
JOIN' has a higher syntactical priority than the comma, which is  
the lowest priority of all. In other words, if you were to put a  
LEFT JOIN after your comma joins, MySQL would try to execute the  
LEFT JOIN first, but if you used INNER JOIN, that would be done first.


I think that's about right. :-)

--
Cheers... Chris
Highway 57 Web Development -- http://highway57.co.uk/

I wonder who discovered you could get milk from a cow...
and what on _earth_ did he think he was doing?
   -- Billy Connolly



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ib_logfile based recovery

2006-05-24 Thread Kishore Jalleda

On 5/24/06, Christopher Korn [EMAIL PROTECTED] wrote:

Hello everyone,

Today a script killed a few InnoDB tables in my database. Although I have backups (2 days 
old) I want to recover - if possible - a more current state of my database. Is it 
possible to recover tables via the ib_logfileX files?
If i simply open the files with VIm I can see that much of the data I need is 
in this file.


Greetings,
Chris
--
--chris [EMAIL PROTECTED]




Do you have binary logging enabled from the time of your last backup?
If so, you could try to replay it to get back the data up to the point
where your tables were deleted. You could try doing this according to
the manual at

http://dev.mysql.com/tech-resources/articles/point_in_time_recovery.html

It has various methods which will suit your needs; just follow them carefully...
That's the best way to recover lost data after an unfortunate delete.
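
In outline (after restoring the two-day-old backup first), you replay the binary 
logs written since that backup up to just before the bad statements; the log file 
name, path and cut-off time below are placeholders only:

mysqlbinlog --stop-datetime="2006-05-24 10:00:00" /var/lib/mysql/host-bin.000042 | mysql -u root -p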

The ib_logfileX files are log files used by the InnoDB storage engine,
mainly to roll back any uncommitted transactions so that the database
is in a consistent state after it recovers from a crash, so I don't
think you can use those files to recover lost data. And yes, the
file is basically text, but it's only understood by the InnoDB storage
engine (as far as I know).

Kishore Jalleda
http://kjalleda.googlepages.com/mysqlprojects

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Fun with Dates and Incentives.

2006-05-24 Thread Brian Menke
I'm hoping for some general advice on an approach for the following
scenario:

 

I have a customer who wants to put an incentive program in place for
students taking learning modules and then completing tests. The concept is
simple. Award the first 10 people who complete a test with a score of
100%... that type of thing. Students are allowed to take the test more than
once. Track each time the student takes the test and show the latest score,
etc. You get the idea. I have the database tables and relationships already
all set up for the tests, but it's the tracking of the dates and times that
I don't have and it got me thinking.

 

I need to track down to the day/hour/minute level. Okay, that should be easy
(I think). I'm going to need to do a lot of date/time calculations. Would it
be best just to have a default of CURRENT_TIMESTAMP set for a TIMESTAMP
field? Or, is their something else I should be using? I have limited
experience having to munge and crunch date/time info and I want to make sure
I have the flexibility to do what I need in the future.

 

The next gotcha I thought up is what about different time zones. Obviously
without this consideration, people on the East coast would have an unfair 3
hour advantage over people on the west coast. I guess I can have a time zone
field in my student table so I could derive the time difference. Any
suggestions on a good time zone approach?

 

Here are my two tables as they stand now. I'm wondering if these are set up
in a way to allow me to do all this date time crunching I'm going to need to
do in the future? Any suggestions are greatly appreciated :-)

 

 

CREATE TABLE `students` (
  `store_id` varchar(6) NOT NULL,
  `email` varchar(64) NOT NULL,
  `fname` varchar(32) NOT NULL,
  `lname` varchar(32) NOT NULL,
  `role` char(2) NOT NULL default '5',
  `password` varchar(8) NOT NULL,
  `phone` varchar(24) default NULL,
  `reg_date` date default NULL,
  PRIMARY KEY  (`email`),
  UNIQUE KEY `email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `completed_modules` (
  `module_id` char(2) NOT NULL default '',
  `email` varchar(64) NOT NULL,
  `score` int(2) NOT NULL default '0',
  `time` timestamp NOT NULL default CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 

Brian Menke

Visual Matter, Inc

1445 Foxworthy Ave., Suite 50-215

San Jose, CA 95118

408 375 9969

 

San Jose ~ Los Angeles
www.visualmatter.com 

 



Re: Noob: Converting to Inner Join

2006-05-24 Thread Peter Brawley

Are there any advantages to converting this 'working' query below to
use INNER JOIN ?
If so, what would the correct syntax be ?

SELECT category.name, page.name, content.title, content.body
FROM category, page, content
WHERE content.page_id = page.id
AND page.category_id = category.id
AND category.id =1
ORDER BY content.order_id ASC
LIMIT 0 , 30

Explicit INNER JOINs are easier to read, easier to debug, and since 
5.0.12 always preferable in MySQL for reasons given at 
http://dev.mysql.com/doc/refman/5.1/en/join.html (look for '5.0.12').


SELECT category.name, page.name, content.title, content.body
FROM category
INNER JOIN content USING (category_id)
INNER JOIN page USING (page_id)
WHERE category.id = 1
ORDER BY content.order_id ASC
LIMIT 0 , 30

PB





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Noob: Converting to Inner Join

2006-05-24 Thread Chris Sansom

At 23:17 -0700 23/5/06, Graham Anderson wrote:

Are there any advantages to converting this 'working' query below to
use INNER JOIN ?
If so, what would the correct syntax be ?

SELECT category.name, page.name, content.title, content.body
FROM category, page, content
WHERE content.page_id = page.id
AND page.category_id = category.id
AND category.id =1
ORDER BY content.order_id ASC
LIMIT 0 , 30


And at 11:52 -0500 24/5/06, Peter Brawley wrote:
Explicit INNER JOINs are easier to read, easier to debug, and since 
5.0.12 always preferable in MySQL for reasons given at 
http://dev.mysql.com/doc/refman/5.1/en/join.html (look for '5.0.12').


SELECT category.name, page.name, content.title, content.body
FROM category
INNER JOIN content USING (category_id)
INNER JOIN page USING (page_id)
WHERE category.id = 1
ORDER BY content.order_id ASC
LIMIT 0 , 30


Actually, although I've never used the USING clause - I just looked 
it up - I don't think this would work. Surely the column name has to 
exist in both tables? Graham is using page.category_id and 
category.id, content.page_id and page.id, so I think ON (as I posted 
earlier) is the only way to do this.


Willing to be corrected though. :-)

--
Cheers... Chris
Highway 57 Web Development -- http://highway57.co.uk/

It was a woman who drove me to alcohol, I must write and thank her
   -- W.C. Fields

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Fun with Dates and Incentives.

2006-05-24 Thread George Law
Brian,

Just my 2 cents :)

I always try to use an epoch timestamp for anything time-related.  It's
easier to compare times, and all the functions are built into MySQL to
convert to/from a Unix epoch timestamp.

select unix_timestamp(NOW());
+-----------------------+
| unix_timestamp(NOW()) |
+-----------------------+
|            1148492137 |
+-----------------------+


select from_unixtime(1148492137);
+---------------------------+
| from_unixtime(1148492137) |
+---------------------------+
| 2006-05-24 13:35:37       |
+---------------------------+


Need the number of minutes between two epoch timestamps? Subtract and
divide by 60.

select (unix_timestamp(NOW()) - 1148492137)/60;
+------------------------------------------+
| (unix_timestamp(NOW()) - 1148492137)/60  |
+------------------------------------------+
|                                     2.32 |
+------------------------------------------+



George Law

 -Original Message-
 From: Brian Menke [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, May 24, 2006 12:51 PM
 To: mysql@lists.mysql.com
 Subject: Fun with Dates and Incentives.
 
 I'm hoping for some general advice on an approach for the following
 scenario:
 
 
 
 I have a customer who wants to put an incentive program in place for
 students taking learning modules and then completing tests. The
concept is
 simple. Award the first 10 people who complete a test with a score of
 100%... that type of thing. Students are allowed to take test more
than
 once. Track each time the student takes the test and show the latest
score
 ect. You get the idea. I have the database tables and relationships
 already
 all set up for the tests, but it's the tracking of the dates and times
 that
 I don't have and it got me thinking.
 
 
 
 I need to track down to the day/hour/minute level. Okay, that should
be
 easy
 (I think). I'm going to need to do a lot of date/time calculations.
Would
 it
 be best just to have a default of CURRENT_TIMESTAMP set for a
TIMESTAMP
 field? Or, is their something else I should be using? I have limited
 experience having to munge and crunch date/time info and I want to
make
 sure
 I have the flexibility to do what I need in the future.
 
 
 
 The next gotcha I thought up is what about different time zones.
Obviously
 without this consideration, people on the East coast would have an
unfair
 3
 hour advantage over people on the west coast. I guess I can have a
time
 zone
 field in my student table so I could derive the time difference. Any
 suggestions on a good time zone approach?
 
 
 
 Here are my two tables as they stand now. I'm wondering if these are
set
 up
 in a way to allow me to do all this date time crunching I'm going to
need
 to
 do in the future? Any suggestions are greatly appreciated :-)
 
 
 
 
 
 CREATE TABLE `students` (
 
   `store_id` varchar(6) NOT NULL,
 
   `email` varchar(64) NOT NULL,
 
   `fname` varchar(32) NOT NULL,
 
   `lname` varchar(32) NOT NULL,
 
   `role` char(2) NOT NULL default '5',
 
   `password` varchar(8) NOT NULL,
 
   `phone` varchar(24) default NULL,
 
   `reg_date` date default NULL,
 
   PRIMARY KEY  (`email`),
 
   UNIQUE KEY `email` (`email`)
 
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 
 
 CREATE TABLE `completed_modules` (
 
   `module_id` char(2) NOT NULL default '',
 
   `email` varchar(64) NOT NULL,
 
   `score` int(2) NOT NULL default '0',
 
   `time` timestamp NOT NULL default CURRENT_TIMESTAMP
 
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 
 
 Brian Menke
 
 Visual Matter, Inc
 
 1445 Foxworthy Ave., Suite 50-215
 
 San Jose, CA 95118
 
 408 375 9969
 
 
 
 San Jose ~ Los Angeles
 www.visualmatter.com
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Benchmarking

2006-05-24 Thread Dan Trainor

Hi -

It's been a short while since I've seen any discussion on this subject, 
and I'm wondering what's happened in this arena since then.


I'm curious as to what you guys use for benchmarking nowadays.  I'd like 
to benchmark performance of an InnoDB database on a fancy new server, 
compared to an old, degraded one.


Thanks!
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Replicating queries to testing server

2006-05-24 Thread Dan Trainor

Hi -

I would like to be able to replicate all queries from a live MySQL 
server, to a testing server at the office.


The reason for doing this is to test load under [semi]real-world 
conditions with the new server.  I think that by doing something like 
this, I would be able to fine-tune the new server in preparation for 
replacing the original server.


So what I can't figure out right now is how to set up a situation like 
this, where the live server would replicate every one of its queries to 
the testing machine, and have the testing machine not respond with 
anything - just simply mow through the queries.


The testing server will have a snapshot of the live database, so I will 
have data to work with.  However, the testing machine is on a private 
internal subnet, and I don't see how this type of setup would work from 
a logical MySQL standpoint.


Keeping all this in mind, also remember that I cannot change any of the 
code which references the MySQL server.  I need to be able to do this 
using some native MySQL function.


Any feedback would be greatly appreciated.  I look forward to all your 
responses.


Thanks!
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: LOAD DATA FROM MASTER stops unfinished with Query OK

2006-05-24 Thread Dan Trainor

Bgs wrote:


No ideas?

I tried playing around with read/write timeouts (even thought the 
replication is fast), all size limits are greater than the whole 
replicated db. The last table with accesses MYD and zero size is a small 
one (a couple of dozens kBs).


Bgs wrote:



 Greetings,

I played around with load data from master (ldfm) and it worked fine 
in test environment. Now I want to replicate our actual db to a slave. 
When I issue the ldfm command, it starts the replication. I get Query 
OK, but only about 5% of the db is replicated. Apparently all tables 
that are on the slave in the end are exact copies of the master 
tables, but most MYD files are zero sized



Any ideas?

Thanks in advance
Bgs






Hi -

Which storage engine are you using for the tables or database which 
you're trying to replicate?


Thanks
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replicating queries to testing server

2006-05-24 Thread Dan Trainor

Dan Trainor wrote:

Hi -

I would like to be able to replicate all queries from a live MySQL 
server, to a testing server at the office.


The reason for doing this is to test load under [semi]real-world 
conditions with the new server.  I think that by doing something like 
this, I would be able to fine-tune the new server in preparation for 
replacing the original server.


So what I can't figure out right now is how to set up a situation like 
this, where the live server would replicate every one of it's queries to 
the testing machine, and have the testing machine not respond with 
anything - just simply mow through the queries.


The testing server will have a snapshot of the live database, so I will 
ahve data to work with.  However, the testing machine is on a private 
internal subnet, and I don't see how this type of setup would work from 
a logical MySQL standpoint.


Keeping all this in mind, also remember that I cannot change any of the 
code which references the MySQL server.  I need to be able to do this 
using some native MySQL function.


Any feedback would be greatly appreciated.  I look forward to all your 
responses.


Thanks!
-dant



Hi -

So I was thinking about this more, and then it dawned on me.  This is 
simple MySQL replication.


Sorry for wasting your time.

Thanks!
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How do I add a column only if it doesn't exist?

2006-05-24 Thread Daevid Vincent
I want to do something like this:

if not exists `hotel_page_templates`.`hpt_custom_fields`
alter table `hotel_page_templates` add column `hpt_custom_fields` text after
`hpt_alternate_username`;
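
One way to get that effect on MySQL 5.0 or later is to wrap the check in a stored 
procedure that consults information_schema (a sketch only; the procedure name is 
made up, and the table/column names are the ones from above):

DELIMITER //
CREATE PROCEDURE add_hpt_custom_fields_if_missing()
BEGIN
  -- only add the column if information_schema doesn't already list it
  IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS
                 WHERE TABLE_SCHEMA = DATABASE()
                   AND TABLE_NAME   = 'hotel_page_templates'
                   AND COLUMN_NAME  = 'hpt_custom_fields') THEN
    ALTER TABLE hotel_page_templates
      ADD COLUMN hpt_custom_fields text AFTER hpt_alternate_username;
  END IF;
END //
DELIMITER ;

CALL add_hpt_custom_fields_if_missing();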


ÐÆ5ÏÐ 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replicating queries to testing server

2006-05-24 Thread nigel wood

Dan Trainor wrote:


Dan Trainor wrote:


Hi -

I would like to be able to replicate all queries from a live MySQL 
server, to a testing server at the office.


The reason for doing this is to test load under [semi]real-world 
conditions with the new server.



Hi -

So I was thinking about this more, and then it dawned on me.  This is 
simple MySQL replication.


Sorry for wasting the time.


No, it isn't. SELECTs aren't replicated, nor is the timing faithful enough to catch 
contention problems.  I've done this recently, and whilst I don't have 
time now, I'll post my solution tomorrow. Assuming:


A) You're using *nux
B) Your application(s) and database are hosted on different machines
C) You have root access on one of the boxes
D) You can take a live db snapshot

It will let you record and playback your database server's load.

HTH

Nigel

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problems with triggers

2006-05-24 Thread Ivan Milanez Castellanos

Hi, I'm having problems with a trigger that I'm working on. I want to create
a trigger that, after inserting a row into table A, will copy the row to table B
and, once it is there, will delete the row from table A.

I'm using two triggers for this: one goes on table A and copies the
information to table B after insert.

The other goes on table B and performs (or at least it tries) the delete
command on table A after insert.

The short version of all of this is this:

CREATE DEFINER='[EMAIL PROTECTED]' TRIGGER `DB`.`insTrigger` AFTER INSERT ON 
`DB`.`A`
 FOR EACH ROW begin
insert into B
values(
new.idA
);
end;

CREATE DEFINER='[EMAIL PROTECTED]' TRIGGER `DB`.`delTrigger` AFTER INSERT ON 
`DB`.`B`
 FOR EACH ROW begin
delete from A where idA = new.idB;
end;

But all I get when I try to insert into A is:

Can't update table 'reporte' in stored function/trigger because it is
already used by statement which invoked this stored function/trigger.

Any ideas?

TIA
--
Ivan Milanez Castellanos
+---+
| Artificial Intelligence will  |
| Never be able to compete with |
| Natural Stupidity |
|  -- Isaac Asimov --   |
+---+


Re: Replicating queries to testing server

2006-05-24 Thread Dan Trainor

nigel wood wrote:

Dan Trainor wrote:


Dan Trainor wrote:


Hi -

I would like to be able to replicate all queries from a live MySQL 
server, to a testing server at the office.


The reason for doing this is to test load under [semi]real-world 
conditions with the new server.




Hi -

So I was thinking about this more, and then it dawned on me.  This is 
simple MySQL replication.


Sorry for wasting the time.



No, it isn't. Selects aren't replicated nor is the timing true to catch 
contention problems.  I've done this recently and whilst I don't have 
time now I'll post my solution tomorrow. Assuming:


A) Your using *nux
B) Your application(s) and database are hosted on different machines
C) You have root access on one of the boxes
D) You can take a live db snapshot

It will let you record and playback your database server's load.

HTH

Nigel



Hi, Nigel -

My situation does meet the above requirements, and I would be very 
grateful if you would spend a moment on this tomorrow when you get 
time.  I look forward to your reply.


Thanks!
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Noob: Converting to Inner Join

2006-05-24 Thread Peter Brawley

  Surely the column name has to exist in both tables?
Indeed.
  Graham is using page.category_id and category.id, content.page_id and 
  page.id
His column specs weren't complete. If you're right on this, though...
  , so I think ON (as I posted earlier) is the only way to do this.
...you're right on that too.

PB


Chris Sansom wrote:

At 23:17 -0700 23/5/06, Graham Anderson wrote:

Are there any advantages to converting this 'working' query below to
use INNER JOIN ?
If so, what would the correct syntax be ?

SELECT category.name, page.name, content.title, content.body
FROM category, page, content
WHERE content.page_id = page.id
AND page.category_id = category.id
AND category.id =1
ORDER BY content.order_id ASC
LIMIT 0 , 30


And at 11:52 -0500 24/5/06, Peter Brawley wrote:
Explicit INNER JOINs are easier to read, easier to debug, and since 
5.0.12 always preferable in MySQL for reasons given at 
http://dev.mysql.com/doc/refman/5.1/en/join.html (look for '5.0.12').


SELECT category.name, page.name, content.title, content.body
FROM category
INNER JOIN content USING (category_id)
INNER JOIN page USING (page_id)
WHERE category.id = 1
ORDER BY content.order_id ASC
LIMIT 0 , 30


Actually, although I've never used the USING clause - I just looked it 
up - I don't think this would work. Surely the column name has to 
exist in both tables? Graham is using page.category_id and 
category.id, content.page_id and page.id, so I think ON (as I posted 
earlier) is the only way to do this.


Willing to be corrected though. :-)






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Fun with Dates and Incentives.

2006-05-24 Thread Jason Dimberg
I think doing a sort by date with limit 10 should get you the first 
ten.  I believe the now() function uses the server time, so no need to 
do date/time calcs really.


Good luck,
Jason

Brian Menke wrote:

I'm hoping for some general advice on an approach for the following
scenario:

 


I have a customer who wants to put an incentive program in place for
students taking learning modules and then completing tests. The concept is
simple. Award the first 10 people who complete a test with a score of
100%... that type of thing. Students are allowed to take test more than
once. Track each time the student takes the test and show the latest score
ect. You get the idea. I have the database tables and relationships already
all set up for the tests, but it's the tracking of the dates and times that
I don't have and it got me thinking.

 


I need to track down to the day/hour/minute level. Okay, that should be easy
(I think). I'm going to need to do a lot of date/time calculations. Would it
be best just to have a default of CURRENT_TIMESTAMP set for a TIMESTAMP
field? Or, is their something else I should be using? I have limited
experience having to munge and crunch date/time info and I want to make sure
I have the flexibility to do what I need in the future.

 


The next gotcha I thought up is what about different time zones. Obviously
without this consideration, people on the East coast would have an unfair 3
hour advantage over people on the west coast. I guess I can have a time zone
field in my student table so I could derive the time difference. Any
suggestions on a good time zone approach?

 


Here are my two tables as they stand now. I'm wondering if these are set up
in a way to allow me to do all this date time crunching I'm going to need to
do in the future? Any suggestions are greatly appreciated :-)

 

 


CREATE TABLE `students` (

  `store_id` varchar(6) NOT NULL,

  `email` varchar(64) NOT NULL,

  `fname` varchar(32) NOT NULL,

  `lname` varchar(32) NOT NULL,

  `role` char(2) NOT NULL default '5',

  `password` varchar(8) NOT NULL,

  `phone` varchar(24) default NULL,

  `reg_date` date default NULL,

  PRIMARY KEY  (`email`),

  UNIQUE KEY `email` (`email`)

) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 


CREATE TABLE `completed_modules` (

  `module_id` char(2) NOT NULL default '',

  `email` varchar(64) NOT NULL,

  `score` int(2) NOT NULL default '0',

  `time` timestamp NOT NULL default CURRENT_TIMESTAMP

) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 


Brian Menke

Visual Matter, Inc

1445 Foxworthy Ave., Suite 50-215

San Jose, CA 95118

408 375 9969

 


San Jose ~ Los Angeles
www.visualmatter.com 

 



  




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Fun with Dates and Incentives.

2006-05-24 Thread Peter Brawley




Brian

Re your schema, 
 --it's redundant to define PRIMARY and UNIQUE keys on the same column,
 --why not an INT student id?
 --what if two (eg married) students share an email account?
 --comparing datetimes across multiple time zones will be simpler if you
   set completed_modules.time=UTC_TIMESTAMP in each new row of that table.

That would give ...

CREATE TABLE students (
 id INT NOT NULL,   -- auto_increment [simplest] or assigned by school?
 email varchar(64) NOT NULL,
 fname varchar(32) NOT NULL,
 lname varchar(32) NOT NULL,
 role char(2) NOT NULL default '5',
 password varchar(8) NOT NULL,
 phone varchar(24) default NULL,
 reg_date date default NULL,
 PRIMARY KEY (id),
 KEY email (email)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE completed_modules (
 id INT NOT NULL,
 module_id char(2) NOT NULL default '',
 score INT NOT NULL default 0,
 time timestamp NOT NULL default CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

To find the first 10 scores of 100 on a particular module, just ...

SELECT
 CONCAT(s.lname,', ',s.fname) AS Name,
 c.time,
 c.score
FROM students s
INNER JOIN completed_modules c USING (id)
WHERE c.module_id = 1 AND c.score = 100
ORDER BY c.time ASC
LIMIT 10;
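
And, for illustration, each completed test would then be stamped in UTC at insert 
time (the values here are made up):

INSERT INTO completed_modules (id, module_id, score, time)
VALUES (42, '01', 100, UTC_TIMESTAMP());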

PB


  I'm hoping for some general advice on an approach for the following
scenario:

 

I have a customer who wants to put an incentive program in place for
students taking learning modules and then completing tests. The concept is
simple. Award the first 10 people who complete a test with a score of
100%... that type of thing. Students are allowed to take test more than
once. Track each time the student takes the test and show the latest score
ect. You get the idea. I have the database tables and relationships already
all set up for the tests, but it's the tracking of the dates and times that
I don't have and it got me thinking.

 

I need to track down to the day/hour/minute level. Okay, that should be easy
(I think). I'm going to need to do a lot of date/time calculations. Would it
be best just to have a default of CURRENT_TIMESTAMP set for a TIMESTAMP
field? Or, is their something else I should be using? I have limited
experience having to munge and crunch date/time info and I want to make sure
I have the flexibility to do what I need in the future.

 

The next gotcha I thought up is what about different time zones. Obviously
without this consideration, people on the East coast would have an unfair 3
hour advantage over people on the west coast. I guess I can have a time zone
field in my student table so I could derive the time difference. Any
suggestions on a good time zone approach?

 

Here are my two tables as they stand now. I'm wondering if these are set up
in a way to allow me to do all this date time crunching I'm going to need to
do in the future? Any suggestions are greatly appreciated :-)

 

 

CREATE TABLE `students` (

  `store_id` varchar(6) NOT NULL,

  `email` varchar(64) NOT NULL,

  `fname` varchar(32) NOT NULL,

  `lname` varchar(32) NOT NULL,

  `role` char(2) NOT NULL default '5',

  `password` varchar(8) NOT NULL,

  `phone` varchar(24) default NULL,

  `reg_date` date default NULL,

  PRIMARY KEY  (`email`),

  UNIQUE KEY `email` (`email`)

) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 

CREATE TABLE `completed_modules` (

  `module_id` char(2) NOT NULL default '',

  `email` varchar(64) NOT NULL,

  `score` int(2) NOT NULL default '0',

  `time` timestamp NOT NULL default CURRENT_TIMESTAMP

) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 

Brian Menke

Visual Matter, Inc

1445 Foxworthy Ave., Suite 50-215

San Jose, CA 95118

408 375 9969

 

San Jose ~ Los Angeles
www.visualmatter.com 

 


  
  



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Administrator failing to run on FC5

2006-05-24 Thread Whil Hentzen

Hi folks,

Brand new Fedora Core 5 box running MySQL 5. I can get into the monitor 
and connect, work with databases, etc. I can't get the MySQL 
Administrator to run - get lots of errors:


1. I downloaded mysql-administrator-1.1.10-1.i386.rpm.
2. As root, in /usr/bin, did
 rpm -Uivh /home/zips/mysql-administrator-1.1.10-1.i386.rpm
3. cd /usr/bin
4. as root and as regular user: # mysql-administrator

Got a slew (40? 50?) of error messages, all beginning with
(mysql-administrator-bin:5787): GLib-GObject-CRITICAL **: lots of msgs

Most having to do with assertion failed.

Did I install incorrectly or in the wrong place, or is something else wrong?

Thanks,

Whil

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Fun with Dates and Incentives.

2006-05-24 Thread Brian Menke
Peter, thanks for the detailed info. I will figure out how to get rid of the
UNIQUE key. Somehow that got added. Thanks for the catch. As far as INT for
student id goes, I'm using email because it will be unique, and offers an
easy way to track a user through the app I'm building (user name, password,
session id's etc.) but I do get what you are saying. Thanks for the
UTC_TIMESTAMP suggestion. Although, since I haven't had a lot of experience,
I don't really understand why it is better than CURRENT_TIMESTAMP. But
that's why I asked for advice :-)

 

-Brian

 

  _  

From: Peter Brawley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 24, 2006 1:31 PM
To: Brian Menke
Cc: mysql@lists.mysql.com
Subject: Re: Fun with Dates and Incentives.

 

Brian

Re your schema, 
  --it's redundant to define PRIMARY and UNIQUE keys on the same column,
  --why not an INT student id?
  --what if two (eg married) students share an email account?
  --comparing datetimes across multiple time zones will be simpler if you  
set completed_modules.time=UTC_TIMESTAMP in each new row of that table.

That would give ...

CREATE TABLE students (
  id INT NOT NULL,-- auto_increment [simplest] or assigned by
school?
  email varchar(64) NOT NULL,
  fname varchar(32) NOT NULL,
  lname varchar(32) NOT NULL,
  role char(2) NOT NULL default '5',
  password varchar(8) NOT NULL,
  phone varchar(24) default NULL,
  reg_date date default NULL,
  PRIMARY KEY (id),
  KEY email (email)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE completed_modules (
  id INT NOT NULL,
  module_id char(2) NOT NULL default '',
  score INT NOT NULL default 0,
  time timestamp NOT NULL default CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

To find the first 10 scores of 100 on a particular module, just ...

SELECT
  CONCAT(s.lname,', ',s.fname) AS Name,
  c.time,
  c.score
FROM students s
INNER JOIN completed_modules c USING (id)
WHERE c.module_id = 1 AND c.score = 100
ORDER BY c.time ASC
LIMIT 10;

PB




I'm hoping for some general advice on an approach for the following
scenario:
 
 
 
I have a customer who wants to put an incentive program in place for
students taking learning modules and then completing tests. The concept is
simple. Award the first 10 people who complete a test with a score of
100%... that type of thing. Students are allowed to take test more than
once. Track each time the student takes the test and show the latest score
ect. You get the idea. I have the database tables and relationships already
all set up for the tests, but it's the tracking of the dates and times that
I don't have and it got me thinking.
 
 
 
I need to track down to the day/hour/minute level. Okay, that should be easy
(I think). I'm going to need to do a lot of date/time calculations. Would it
be best just to have a default of CURRENT_TIMESTAMP set for a TIMESTAMP
field? Or, is their something else I should be using? I have limited
experience having to munge and crunch date/time info and I want to make sure
I have the flexibility to do what I need in the future.
 
 
 
The next gotcha I thought up is what about different time zones. Obviously
without this consideration, people on the East coast would have an unfair 3
hour advantage over people on the west coast. I guess I can have a time zone
field in my student table so I could derive the time difference. Any
suggestions on a good time zone approach?
 
 
 
Here are my two tables as they stand now. I'm wondering if these are set up
in a way to allow me to do all this date time crunching I'm going to need to
do in the future? Any suggestions are greatly appreciated :-)
 
 
 
 
 
CREATE TABLE `students` (
 
  `store_id` varchar(6) NOT NULL,
 
  `email` varchar(64) NOT NULL,
 
  `fname` varchar(32) NOT NULL,
 
  `lname` varchar(32) NOT NULL,
 
  `role` char(2) NOT NULL default '5',
 
  `password` varchar(8) NOT NULL,
 
  `phone` varchar(24) default NULL,
 
  `reg_date` date default NULL,
 
  PRIMARY KEY  (`email`),
 
  UNIQUE KEY `email` (`email`)
 
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 
 
CREATE TABLE `completed_modules` (
 
  `module_id` char(2) NOT NULL default '',
 
  `email` varchar(64) NOT NULL,
 
  `score` int(2) NOT NULL default '0',
 
  `time` timestamp NOT NULL default CURRENT_TIMESTAMP
 
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 
 
Brian Menke
 
Visual Matter, Inc
 
1445 Foxworthy Ave., Suite 50-215
 
San Jose, CA 95118
 
408 375 9969
 
 
 
San Jose ~ Los Angeles
www.visualmatter.com 
 
 
 
 
  
 





  _  



 
  


mysql performance

2006-05-24 Thread Moritz Möller
Hi list,

we're running some large high-traffic mysql servers, and are currently
reaching the limit of our machines.

We're using mysql 4.1 / innodb on debian, ibdata is about 35GB. Hardware is
quad xeon dualcore, 8 GB RAM. Disk-io is nearly zero, limiting factor is
CPU.
The queries run very fast (I seldom see a process that's running longer than
a second), but there are too many of them, I guess.

As far as I know, NDB keeps the whole database in memory, so with indices
and some mem as reserve, we'd need ~48GB (3x16 or something) in total for
NDB :(

Does someone know other solutions to this? Is NDB the only storage engine
supporting clustering?

Thanks in advance,

Moritz



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How do I turn off error checking

2006-05-24 Thread Daevid Vincent
Semi related to this, as it appears in my searching that this is
unfortunately not a supported feature. Is there a flag or something that I
can put in my .sql file that will turn OFF any error checking, execute my
statements (so that even if one fails, the next one will be tried), then
turn it back on again at the end of the file? Sort of like how you can do
the SET FOREIGN_KEY_CHECKS=0; so something akin to SET ERROR_CHECKS=0;
or SET FORCE=1;

And, yes, I do know that there is a --force option for the importing, but I
cannot use that in my case. This is an automated script and generally I want
the SQL to fail on any errors b/c then I know the upgrade is hosed. This is
a special case where some people got a SQL upgrade (by hand) and some
didn't, hence the discrepancy between the schemas.

DÆVID  

 -Original Message-
 From: Daevid Vincent [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, May 24, 2006 12:13 PM
 To: mysql@lists.mysql.com
 Subject: How do I add a column only if it doesn't exist?
 
 I want to do something like this:
 
 if not exists `hotel_page_templates`.`hpt_custom_fields`
 alter table `hotel_page_templates` add column 
 `hpt_custom_fields` text after
 `hpt_alternate_username`;
 
 
 ÐÆ5ÏÐ 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Fun with Dates and Incentives.

2006-05-24 Thread Peter Brawley

Brian,

Somehow the server ate my response...

Re your schema,
 --it's redundant to define PRIMARY and UNIQUE keys on the same column,
 --why not an INT student id?
 --what if two (eg married) students share an email account?
 --comparing datetimes across multiple time zones will be simpler if you  
   set completed_modules.time=UTC_TIMESTAMP in each new row of that table.


That would give ...

CREATE TABLE students (
 id INT NOT NULL,-- auto_increment [simplest] or assigned by 
school?

 email varchar(64) NOT NULL,
 fname varchar(32) NOT NULL,
 lname varchar(32) NOT NULL,
 role char(2) NOT NULL default '5',
 password varchar(8) NOT NULL,
 phone varchar(24) default NULL,
 reg_date date default NULL,
 PRIMARY KEY (id),
 KEY email (email)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE completed_modules (
 id INT NOT NULL,
 module_id char(2) NOT NULL default '',
 score INT NOT NULL default 0,
 time timestamp NOT NULL default CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

To find the first 10 scores of 100 on a particular module, just ...

SELECT
 CONCAT(s.lname,', ',s.fname) AS Name,
 c.time,
 c.score
FROM students s
INNER JOIN completed_modules c USING (id)
WHERE c.module_id = 1 AND c.score = 100
ORDER BY c.time ASC
LIMIT 10;

PB






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Fun with Dates and Incentives.

2006-05-24 Thread Peter Brawley




Brian,

CURRENT_TIMESTAMP gives you time in your server's timezone.
UTC_TIMESTAMP gives GMT (universal) time, so dispenses with all timezone
adjustments. Trouble is, you cannot use it as a default. You have to
pass it as an INSERT value.

PB
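
A minimal sketch of what passing it as an INSERT value looks like, using the
completed_modules layout proposed earlier in this thread (the id, module_id,
and score values are made up for illustration):

INSERT INTO completed_modules (id, module_id, score, time)
VALUES (42, '01', 100, UTC_TIMESTAMP());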

-

Brian Menke wrote:

Peter, thanks for the detailed info. I will figure out how to get rid of
the UNIQUE key. Somehow that got added. Thanks for the catch. As far as
INT for student id goes, I'm using email because it will be unique, and
offers an easy way to track a user through the app I'm building (user
name, password, session id's etc.) but I do get what you are saying.
Thanks for the UTC_TIMESTAMP suggestion. Although, since I haven't had a
lot of experience, I don't really understand why it is better than
CURRENT_TIMESTAMP. But that's why I asked for advice :-)

-Brian

From: Peter Brawley [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 24, 2006 1:31 PM
To: Brian Menke
Cc: mysql@lists.mysql.com
Subject: Re: Fun with Dates and Incentives.

Brian

Re your schema,
 --it's redundant to define PRIMARY and UNIQUE keys on the same column,
 --why not an INT student id?
 --what if two (eg married) students share an email account?
 --comparing datetimes across multiple time zones will be simpler if you
   set completed_modules.time=UTC_TIMESTAMP in each new row of that table.
  
That would give ...
  
CREATE TABLE students (
 id INT NOT NULL,  -- auto_increment [simplest] or assigned by school?
 email varchar(64) NOT NULL,
 fname varchar(32) NOT NULL,
 lname varchar(32) NOT NULL,
 role char(2) NOT NULL default '5',
 password varchar(8) NOT NULL,
 phone varchar(24) default NULL,
 reg_date date default NULL,
 PRIMARY KEY (id),
 KEY email (email)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
  
CREATE TABLE completed_modules (
 id INT NOT NULL,
 module_id char(2) NOT NULL default '',
 score INT NOT NULL default 0,
 time timestamp NOT NULL default CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
  
To find the first 10 scores of 100 on a particular module, just ...
  
SELECT
 CONCAT(s.lname,', ',s.fname) AS Name,
 c.time,
 c.score
FROM students s
INNER JOIN completed_modules c USING (id)
WHERE c.module_id = 1 AND c.score = 100
ORDER BY c.time ASC
LIMIT 10;
  
PB
  
  
  
  I'm hoping for some general advice on an approach for the following
  scenario:
  
   
  
  I have a customer who wants to put an incentive program in place for
  students taking learning modules and then completing tests. The concept is
  simple. Award the first 10 people who complete a test with a score of
  100%... that type of thing. Students are allowed to take test more than
  once. Track each time the student takes the test and show the latest score
  ect. You get the idea. I have the database tables and relationships already
  all set up for the tests, but it's the tracking of the dates and times that
  I don't have and it got me thinking.
  
   
  
  I need to track down to the day/hour/minute level. Okay, that should be easy
  (I think). I'm going to need to do a lot of date/time calculations. Would it
  be best just to have a default of CURRENT_TIMESTAMP set for a TIMESTAMP
  field? Or, is their something else I should be using? I have limited
  experience having to munge and crunch date/time info and I want to make sure
  I have the flexibility to do what I need in the future.
  
   
  
  The next gotcha I thought up is what about different time zones. Obviously
  without this consideration, people on the East coast would have an unfair 3
  hour advantage over people on the west coast. I guess I can have a time zone
  field in my student table so I could derive the time difference. Any
  suggestions on a good time zone approach?
  
   
  
  Here are my two tables as they stand now. I'm wondering if these are set up
  in a way to allow me to do all this date time crunching I'm going to need to
  do in the future? Any suggestions are greatly appreciated :-)
  
   
  
   
  
  CREATE TABLE `students` (
  
   `store_id` varchar(6) NOT NULL,
  
   `email` varchar(64) NOT NULL,
  
   `fname` varchar(32) NOT NULL,
  
   `lname` varchar(32) NOT NULL,
  
   `role` char(2) NOT NULL default '5',
  
   `password` varchar(8) NOT NULL,
  
   `phone` varchar(24) default NULL,
  
   `reg_date` date default NULL,
  
   PRIMARY KEY (`email`),
  
   UNIQUE KEY `email` (`email`)
  
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
  
   
  
  CREATE TABLE `completed_modules` (
  
   `module_id` char(2) NOT NULL default '',
  
   `email` varchar(64) NOT NULL,
  
   `score` int(2) NOT NULL default '0',
  
   `time` timestamp NOT NULL default CURRENT_TIMESTAMP
  
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
  
   
  
  Brian Menke
  
  Visual Matter, Inc
  
  1445 Foxworthy Ave., Suite 50-215
  
  San Jose, CA 95118
  
  408 375 9969
  
   
  
  San Jose ~ Los Angeles
  www.visualmatter.com 
  
   
  
  
   
  
  




  
  

Re: mysql performance

2006-05-24 Thread Dan Trainor

Moritz Möller wrote:

Hi list,

we're running some large high-traffic mysql servers, and are currently
reaching the limit of our machines.

We're using mysql 4.1 / innodb on debian, ibdata is about 35GB. Hardware is
quad xeon dualcore, 8 GB RAM. Disk-io is nearly zero, limiting factor is
CPU.
The queries run very fast (I seldom see a process that's running longer than
a second), but there are too many of them, I guess.

As far as I know, NDB keeps the whole database in memory, so with indices
and some mem as reserve, we'd need ~48GB (3x16 or something) in total for
NDB :(

Does someone know other solutions to this? Is NDB the only storage engine
supporting clustering?

Thanks in advantage,

Moritz





Hi -

That's quite a large database.  I, too, have been dealing with what I 
thought was a large database for this new project.  Being 2G, it hardly 
compares to your database size.


Keep in mind, however, that a 36G ibdata file does not necessarily mean 
that you are using 36G to store data.  InnoDB documents from the MySQL 
site explain ways to compact these files, possibly shrinking the size of 
ibdata files.  Another way to get a better idea of how much data you're 
actually using is to use the 'SHOW TABLE STATUS' query from within 
MySQL.  Take the InnoDB Free: item under the 'Comment:' column, and 
subtract this from the total size of the ibdata file(s).  This will give 
you a more accurate representation of how much of that ibdata file 
you're actually using.  I think.  (Someone mind correcting me if I'm way 
off here?)
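
A sketch of that check (the table name here is hypothetical):

SHOW TABLE STATUS LIKE 'some_innodb_table';
-- The Comment column reads something like 'InnoDB free: 2048000 kB';
-- subtract that figure from the total size of the ibdata file(s) for a
-- rough estimate of the space actually in use.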


NDB may not be your solution.  Even though disk-based storage is 
included with NDB in 5.1 and beyond, I'm not too sure how this will 
affect the speed of your operations.  I suppose it's worth a try, however.


Please take this advice with a grain of salt, as InnoDB is still quite 
new to me, as well.  Other things I've found to speed up large databases 
are to build indexes properly and to test them with the EXPLAIN 
statement.  This alone has let me speed up our operations by as much as 
30% in most cases.
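
A minimal illustration of that kind of check (table and column names are
hypothetical):

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- A non-NULL 'key' with type 'ref' or 'range' means an index is being used;
-- type 'ALL' means a full table scan, which usually calls for a better index.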


Thanks
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query problem: UNION in subquery

2006-05-24 Thread Luke

A big Thank you goes to you! That was it!

Looks like I tried with too many parentheses i.e.

/this is wrong/

SELECT   FROM ...

(
(SELECT   FROM ...)

UNION

(SELECT   FROM ...)

) AS abc
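
For contrast, the form from Neeraj's reply that does work keeps the whole
UNION inside one pair of parentheses defining the derived table, with the
alias outside:

SELECT ObjectId FROM
(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId,
        f15.Form15PatientID AS PtId FROM form15 f15
 WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6')
 UNION
 SELECT f15.Form15SampleTube6RnaBarcode AS ObjectId,
        f15.Form15PatientID AS PtId FROM form15 f15
 WHERE f15.Form15SampleTube6RnaBarcode IN ('01D2V','01DH6')
) AS abc;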


Regards,
Luke


- Original Message - 
From: Neeraj [EMAIL PROTECTED]

To: 'Luke' [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Wednesday, May 24, 2006 2:16 AM
Subject: RE: Query problem: UNION in subquery




Hi Luke..


Try this

SELECT ObjectId FROM


(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId,
f15.Form15PatientID AS PtId FROM form15 f15
WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6')


UNION


SELECT f15.Form15SampleTube6RnaBarcode AS ObjectId,
f15.Form15PatientID  AS PtId FROM form15 f15
WHERE f15.Form15SampleTube6RnaBarcode IN ('01D2V','01DH6'))as abc


Cheers :)


Neeraj Black Bits

-Original Message-
From: Luke [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 24, 2006 9:36 AM
To: mysql@lists.mysql.com
Subject: Query problem: UNION in subquery

Hello!

I have a problem using UNIONs inside subqueries. I have simplified my
query to make it more readable/understandable.

The question is about the right syntax.

1.
This works fine /UNION/


(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId,
f15.Form15PatientID AS PtId FROM form15 f15
WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6'))


UNION


(SELECT f15.Form15SampleTube6RnaBarcode AS ObjectId,
f15.Form15PatientID  AS PtId FROM form15 f15
WHERE f15.Form15SampleTube6RnaBarcode IN ('01D2V','01DH6'))


2.
This works fine too /subquery/:


SELECT ObjectId FROM


(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId,
f15.Form15PatientID AS PtId FROM form15 f15
WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6')) AS
SubTable1;


3.
But when I run 1 and 2 combined I get into trouble. This is a query draft; I
can't come up with the right syntax:


SELECT ObjectId FROM


(SELECT f15.Form15SampleTube1RnaBarcode AS ObjectId,
f15.Form15PatientID AS PtId FROM form15 f15
WHERE f15.Form15SampleTube1RnaBarcode IN ('01D2V','01DH6'))


UNION


(SELECT f15.Form15SampleTube6RnaBarcode AS ObjectId,
f15.Form15PatientID  AS PtId FROM form15 f15
WHERE f15.Form15SampleTube6RnaBarcode IN ('01D2V','01DH6'))


I tried many combinations and got various syntax errors. Any ideas?


Thanks,
Luke



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Administrator failing to run on FC5

2006-05-24 Thread Dan Trainor

Whil Hentzen wrote:

Hi folks,

Brand new Fedora Core 5 box running MySQL 5. I can get into the monitor 
and connect, work with databases, etc. I can't get the MySQL 
Administrator to run - get lots of errors:


1. I downloaded mysql-administrator-1.1.10-1.i386.rpm.
2. As root, in /usr/bin, did
 rpm -Uivh /home/zips/mysql-administrator-1.1.10-1.i386.rpm
3. cd /usr/bin
4. as root and as regular user: # mysql-administrator

Got a slew (40? 50?) of error messages, all bgin with
mysql-administrator-bin:5787):Glib-GObject-CRITICAL **: lots of msgs

Most having to do with assertion failed.

Did I install incorrectly or in the wrong place, or is something else 
wrong?


Thanks,

Whil



Hi -

Mine runs with warnings, not errors.  It works just fine.  However, this 
is on CentOS 4.3.  I've yet to get MySQL Query Browser to work under 
CentOS 4.3, but that's a project for tonight.


Note however that MySQL Administrator and MySQL Query Browser are two 
different things.


Aside from the messages being printed, are you able to connect to the 
database and do administrative functions with MySQL Administrator?


Thanks
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: mysql performance / ndb 5.1 performance

2006-05-24 Thread Moritz Möller
Hi Dan,

there are about 2GB free, so the net size would still be 32 GB.

The queries are really optimized; 99.9% of all queries can be satisfied
without table scans.

Well, I guess I have to give NDB a chance, I hope it will help. The only
alternative I come to is to cluster the database on application level (use
server userID%numServers), which would be a [insert favourite non-swear-word
here] lot of work ;)

Moritz


-Original Message-
From: Dan Trainor [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 25, 2006 1:41 AM
To: Moritz Möller; mysql@lists.mysql.com
Subject: Re: mysql performance

Moritz Möller wrote:
 Hi list,
 
 we're running some large high-traffic mysql servers, and are currently
 reaching the limit of our machines.
 
 We're using mysql 4.1 / innodb on debian, ibdata is about 35GB. Hardware
is
 quad xeon dualcore, 8 GB RAM. Disk-io is nearly zero, limiting factor is
 CPU.
 The queries run very fast (I seldom see a process that's running longer
than
 a second), but there are too many of them, I guess.
 
 As far as I know, NDB keeps the whole database in memory, so with indices
 and some mem as reserve, we'd need ~48GB (3x16 or something) in total for
 NDB :(
 
 Does someone know other solutions to this? Is NDB the only storage engine
 supporting clustering?
 
 Thanks in advantage,
 
 Moritz
 
 
 

Hi -

That's quite a large database.  I, too, have been dealing with what I 
thought was a large database for this new project.  Being 2G, it hardly 
compares to your database size.

Keep in mind, however, that a 36G ibdata file does not necessarily mean 
that you are using 36G to store data.  InnoDB documents from the MySQL 
site explain ways to compact these files, possibly shrinking the size of 
ibdata files.  Another way to get a better idea of how much data you're 
actually using is to use the 'SHOW TABLE STATUS' query from within 
MySQL.  Take the InnoDB Free: item under the 'Comment:' column, and 
subtract this from the total size of the ibdata file(s).  This will give 
you a more accurate representation of how much of that ibdata file 
you're actually using.  I think.  (Someone mind correcting me if I'm way 
off here?)

NDB may not be your solution.  Even though disk-based storage is 
included with NDB in 5.1 and beyond, I'm not too sure how this will 
affect the speed of your operations.  I suppose it's worth a try, however.

Please take this advise with a grain of salt, as InnoDB is still quite 
new to me, as well.  Other things I've found to speed up large databases 
are to properly make indexes, and testing them with the EXPLAIN 
function.  This alone has let me to speed up our operations as much as 
30% in most cases.

Thanks
-dant


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql performance / ndb 5.1 performance

2006-05-24 Thread Dan Trainor

Moritz Möller wrote:

Hi Dan,

there are about 2GB free, so the net size would still be 32 GB.

The queries are really optimized, 99.9% of all queries can be satisfied
without table scans.

Well, I guess I have to give NDB a chance, I hope it will help. The only
alternative I come to is to cluster the database on application level (use
server userID%numServers), which would be a [insert favourite non-swear-word
here] lot of work ;)

Moritz


-Original Message-
From: Dan Trainor [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 25, 2006 1:41 AM

To: Moritz Möller; mysql@lists.mysql.com
Subject: Re: mysql performance

Moritz Möller wrote:


Hi list,

we're running some large high-traffic mysql servers, and are currently
reaching the limit of our machines.

We're using mysql 4.1 / innodb on debian, ibdata is about 35GB. Hardware


is


quad xeon dualcore, 8 GB RAM. Disk-io is nearly zero, limiting factor is
CPU.
The queries run very fast (I seldom see a process that's running longer


than


a second), but there are too many of them, I guess.

As far as I know, NDB keeps the whole database in memory, so with indices
and some mem as reserve, we'd need ~48GB (3x16 or something) in total for
NDB :(

Does someone know other solutions to this? Is NDB the only storage engine
supporting clustering?

Thanks in advantage,

Moritz






Hi -

That's quite a large database.  I, too, have been dealing with what I 
thought was a large database for this new project.  Being 2G, it hardly 
compares to your database size.


Keep in mind, however, that a 36G ibdata file does not necessarily mean 
that you are using 36G to store data.  InnoDB documents from the MySQL 
site explain ways to compact these files, possibly shrinking the size of 
ibdata files.  Another way to get a better idea of how much data you're 
actually using is to use the 'SHOW TABLE STATUS' query from within 
MySQL.  Take the InnoDB Free: item under the 'Comment:' column, and 
subtract this from the total size of the ibdata file(s).  This will give 
you a more accurate representation of how much of that ibdata file 
you're actually using.  I think.  (Someone mind correcting me if I'm way 
off here?)


NDB may not be your solution.  Even though disk-based storage is 
included with NDB in 5.1 and beyond, I'm not too sure how this will 
affect the speed of your operations.  I suppose it's worth a try, however.


Please take this advise with a grain of salt, as InnoDB is still quite 
new to me, as well.  Other things I've found to speed up large databases 
are to properly make indexes, and testing them with the EXPLAIN 
function.  This alone has let me to speed up our operations as much as 
30% in most cases.


Thanks
-dant




Hi -

Well, go ahead and do that and let us know how it turns out.  There's a 
whole mailing list on cluster.


Like I said, 5.1 (don't remember specifically which version) has 
file-based storage for cluster as an option.  Good luck with that.


Thanks!
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Finally Working...wow

2006-05-24 Thread sheeri kritzer

Rich

for what?  is

SELECT host,user,password FROM mysql.user;

showing you cleartext passwords?  It shouldn't.

-Sheeri

On 5/23/06, Rich [EMAIL PROTECTED] wrote:

Hi folks.  Me again.

I finally got this all up and running under crypt of 'cleartext'.
So, even though I am going to be on the same box as the server, how
do I set up an MD5 or password entry?

MYSQLCrypt  password()
MYSQLCrypt  password

MYSQLCrypt  MD5()
MYSQLCrypt  MD5

I'm not sure if the brackets are needed or not.

So if I use password or MD5 encryption in the tables, the value in
the field is encrypted.  How does this change how I structure things?

instead of 'passwordalpha' in the password field, I have to enter
'ff08d88bab6edcf9d730a96418c05358'?  I am entering users via my own
interface, and I can't seem to get MD5 working with either MD5 or MD5().

Cheers
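
A minimal sketch of the usual approach, assuming a hypothetical application
table named app_users (separate from mysql.user); the hashing is done in the
query itself, so the hex hash never has to be typed by hand:

INSERT INTO app_users (username, password)
VALUES ('rich', MD5('passwordalpha'));

-- at login, compare hashes rather than plaintext
SELECT username FROM app_users
WHERE username = 'rich' AND password = MD5('passwordalpha');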

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How do I add a column only if it doesn't exist?

2006-05-24 Thread sheeri kritzer

I don't think MySQL can do that... maybe with some kind of stored procedure.

If you're using 5.0 or higher, use INFORMATION_SCHEMA to find out
if the column exists.  If not, use the SHOW CREATE TABLE statement
and parse it.
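
A sketch of that check for MySQL 5.0+, using the table and column from the
original question below:

SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'hotel_page_templates'
  AND COLUMN_NAME = 'hpt_custom_fields';
-- If this returns 0, it is safe to run the ALTER TABLE ... ADD COLUMN.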

Why is this a problem, though?  (Just curious; I'm not sure this would
ever come up.  I don't do automatic schema changes, I always do them
manually, so I'm not too sure why you'd be amiss... unless you're
running gobs of servers and some of them have the column and some of
them don't... but even then you could run a script that alters the
table to add the column and just let it throw an error if it already
exists.)

-Sheeri

On 5/24/06, Daevid Vincent [EMAIL PROTECTED] wrote:

I want to do something like this:

if not exists `hotel_page_templates`.`hpt_custom_fields`
alter table `hotel_page_templates` add column `hpt_custom_fields` text after
`hpt_alternate_username`;


ÐÆ5ÏÐ


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How do I turn off error checking

2006-05-24 Thread sheeri kritzer

If you run a script with

mysql < script.sql

then if there's an error, the script dies.

If you run a script with

mysql -e "source script.sql"

then the script will just spit out an error and move on to the next command.

Hopefully that will help.

-Sheeri

On 5/24/06, Daevid Vincent [EMAIL PROTECTED] wrote:

Semi related to this, as it appears in my searching that this is
unfortunately not a supported feature. Is there a flag or something that I
can put in my .sql file that will turn OFF any error checking, execute my
statements (so that even if one fails, the next one will be tried), then
turn it back on again at the end of the file? Sort of like how you can do
the SET FOREIGN_KEY_CHECKS=0; so something akin to SET ERROR_CHECKS=0;
or SET FORCE=1;

And, yes, I do know that there is a --force option for the importing, but I
cannot use that in my case. This is an automated script and generally I want
the SQL to fail on any errors b/c then I know the upgrade is hosed. This is
a special case where some people got a SQL upgrade (by hand) and some
didn't, hence the discrepency between the schemas.

DÆVID

 -Original Message-
 From: Daevid Vincent [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, May 24, 2006 12:13 PM
 To: mysql@lists.mysql.com
 Subject: How do I add a column only if it doesn't exist?

 I want to do something like this:

 if not exists `hotel_page_templates`.`hpt_custom_fields`
 alter table `hotel_page_templates` add column
 `hpt_custom_fields` text after
 `hpt_alternate_username`;


 ÐÆ5ÏÐ


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



LOAD DATA FROM MASTER stops unfinished with Query OK

2006-05-24 Thread sheeri kritzer

Yeah, I'd be willing to guess that you're mostly InnoDB.  LOAD DATA
FROM MASTER only works for MyISAM.

http://dev.mysql.com/doc/refman/5.0/en/load-data-from-master.html
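
A quick way to sketch out which tables would be skipped, assuming MySQL 5.0+
(on 4.x, SHOW TABLE STATUS shows an Engine column that serves the same
purpose):

SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE <> 'MyISAM'
  AND TABLE_SCHEMA <> 'information_schema';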

-Sheeri

On 5/24/06, Dan Trainor [EMAIL PROTECTED] wrote:

Bgs wrote:

 No ideas?

 I tried playing around with read/write timeouts (even though the
 replication is fast); all size limits are greater than the whole
 replicated db. The last table whose MYD file gets accessed yet stays zero-sized
 is a small one (a couple of dozen kB).

 Bgs wrote:


  Greetings,

 I played around with load data from master (ldfm) and it worked fine
 in a test environment. Now I want to replicate our actual db to a slave.
 When I issue the ldfm command, it starts the replication. I get Query
 OK, but only about 5% of the db is replicated. Apparently all tables
 that are on the slave in the end are exact copies of the master
 tables, but most MYD files are zero sized


 Any ideas?

 Thanks in advance
 Bgs




Hi -

Which storage engine are you using for the tables or database which
you're trying to replicate?

Thanks
-dant

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How do I get off this list that I do not remember joining in the first place!!!!

2006-05-24 Thread Phil Robbins





++
Phil Robbins
Auckland
New Zealand
++

_
Discover fun and games at  @  http://xtramsn.co.nz/kids


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How do I get off this list that I do not remember joining in the first place!!!!

2006-05-24 Thread Daniel Kasak

Phil Robbins wrote:

NOTHING

Perhaps you should read the notice at the bottom of each post that you 
receive from the list !!!


--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How do I get off this list that I do not remember joining in the first place!!!!

2006-05-24 Thread Tim Lucia
There may be a clue at the bottom of every message ;-)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Fun with Dates and Incentives.

2006-05-24 Thread Brian Menke
Wow, if I read that right, it means that when someone on the East coast submits
answers to a test, the stored time is somehow adjusted to match that of someone
who submits answers to a test from the West coast 3 hours later (time-zone
wise)? I can't possibly imagine how that works, but if it does it solves
huge problems for me and I seriously owe you! I'm gonna do some more
research so I understand how this works.
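
A quick way to see what is going on, assuming a server recent enough to have
per-connection time zone support (4.1.3 or later):

SELECT @@time_zone, CURRENT_TIMESTAMP, UTC_TIMESTAMP();
-- CURRENT_TIMESTAMP shifts with the time_zone setting;
-- UTC_TIMESTAMP() returns the same universal instant regardless, so values
-- stored with it compare directly across coasts.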

 

THANKS!

 

-Brian

 

  _  

From: Peter Brawley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 24, 2006 3:56 PM
To: Brian Menke
Cc: mysql@lists.mysql.com
Subject: Re: Fun with Dates and Incentives.

 

Brian,

CURRENT_TIMESTAMP gives you time in your server's timezone. UTC_TIMESTAMP
gives GMT (universal) time, so dispenses with all timezone adjustments.
Trouble is, you cannot use it as a default. You have to pass it as an INSERT
value.

PB

-

Brian Menke wrote: 

Peter, thanks for the detailed info. I will figure out how to get rid of the
UNIQUE key. Somehow that got added. Thanks for the catch. As far as INT for
student id goes, I'm using email because it will be unique, and offers an
easy way to track a user through the app I'm building (user name, password,
session id's etc.) but I do get what you are saying. Thanks for the
UTC_TIMESTAMP suggestion. Although, since I haven't had a lot of experience,
I don't really understand why it is better than CURRENT_TIMESTAMP. But.
that's why I asked for advice :-)

 

-Brian

 

  _  

From: Peter Brawley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 24, 2006 1:31 PM
To: Brian Menke
Cc: mysql@lists.mysql.com
Subject: Re: Fun with Dates and Incentives.

 

Brian

Re your schema, 
  --it's redundant to define PRIMARY and UNIQUE keys on the same column,
  --why not an INT student id?
  --what if two (eg married) students share an email account?
  --comparing datetimes across multiple time zones will be simpler if you  
set completed_modules.time=UTC_TIMESTAMP in each new row of that table.

That would give ...

CREATE TABLE students (
  id INT NOT NULL,-- auto_increment [simplest] or assigned by
school?
  email varchar(64) NOT NULL,
  fname varchar(32) NOT NULL,
  lname varchar(32) NOT NULL,
  role char(2) NOT NULL default '5',
  password varchar(8) NOT NULL,
  phone varchar(24) default NULL,
  reg_date date default NULL,
  PRIMARY KEY (id),
  KEY email (email)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE completed_modules (
  id INT NOT NULL,
  module_id char(2) NOT NULL default '',
  score INT NOT NULL default 0,
  time timestamp NOT NULL default CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

To find the first 10 scores of 100 on a particular module, just ...

SELECT
  CONCAT(s.lname,', ',s.fname) AS Name,
  c.time,
  c.score
FROM students s
INNER JOIN completed_modules c USING (id)
WHERE c.module_id = 1 AND c.score = 100
ORDER BY c.time ASC
LIMIT 10;

PB





I'm hoping for some general advice on an approach for the following
scenario:
 
 
 
I have a customer who wants to put an incentive program in place for
students taking learning modules and then completing tests. The concept is
simple. Award the first 10 people who complete a test with a score of
100%... that type of thing. Students are allowed to take test more than
once. Track each time the student takes the test and show the latest score
ect. You get the idea. I have the database tables and relationships already
all set up for the tests, but it's the tracking of the dates and times that
I don't have and it got me thinking.
 
 
 
I need to track down to the day/hour/minute level. Okay, that should be easy
(I think). I'm going to need to do a lot of date/time calculations. Would it
be best just to have a default of CURRENT_TIMESTAMP set for a TIMESTAMP
field? Or, is their something else I should be using? I have limited
experience having to munge and crunch date/time info and I want to make sure
I have the flexibility to do what I need in the future.
 
 
 
The next gotcha I thought up is what about different time zones. Obviously
without this consideration, people on the East coast would have an unfair 3
hour advantage over people on the west coast. I guess I can have a time zone
field in my student table so I could derive the time difference. Any
suggestions on a good time zone approach?
 
 
 
Here are my two tables as they stand now. I'm wondering if these are set up
in a way to allow me to do all this date time crunching I'm going to need to
do in the future? Any suggestions are greatly appreciated :-)
 
 
 
 
 
CREATE TABLE `students` (
 
  `store_id` varchar(6) NOT NULL,
 
  `email` varchar(64) NOT NULL,
 
  `fname` varchar(32) NOT NULL,
 
  `lname` varchar(32) NOT NULL,
 
  `role` char(2) NOT NULL default '5',
 
  `password` varchar(8) NOT NULL,
 
  `phone` varchar(24) default NULL,
 
  `reg_date` date default NULL,
 
  PRIMARY KEY  (`email`),
 
  UNIQUE KEY `email` (`email`)
 
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
 
 
 
CREATE TABLE `completed_modules` (
 
 

RE: How do I turn off error checking

2006-05-24 Thread Daevid Vincent
As mentioned in the original email, I know about forcing it, but I can't do
that in my case.

DÆVID  

 -Original Message-
 From: sheeri kritzer [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, May 24, 2006 6:27 PM
 To: Daevid Vincent
 Cc: mysql@lists.mysql.com
 Subject: Re: How do I turn off error checking
 
 If you run a script with
 
 mysql < script.sql
 
 then if there's an error, the script dies.
 
 If you run a script with
 
 mysql -e "source script.sql"
 
 Then the script will just spit out an error and move on to 
 the next command.
 
 Hopefully that will help.
 
 -Sheeri
 
 On 5/24/06, Daevid Vincent [EMAIL PROTECTED] wrote:
  Semi related to this, as it appears in my searching that this is
  unfortunately not a supported feature. Is there a flag or 
 something that I
  can put in my .sql file that will turn OFF any error 
 checking, execute my
  statements (so that even if one fails, the next one will be 
 tried), then
  turn it back on again at the end of the file? Sort of like 
 how you can do
  the SET FOREIGN_KEY_CHECKS=0; so something akin to SET 
 ERROR_CHECKS=0;
  or SET FORCE=1;
 
  And, yes, I do know that there is a --force option for the 
 importing, but I
  cannot use that in my case. This is an automated script and 
 generally I want
  the SQL to fail on any errors b/c then I know the upgrade 
 is hosed. This is
  a special case where some people got a SQL upgrade (by 
 hand) and some
  didn't, hence the discrepency between the schemas.
 
  DÆVID
 
   -Original Message-
   From: Daevid Vincent [mailto:[EMAIL PROTECTED]
   Sent: Wednesday, May 24, 2006 12:13 PM
   To: mysql@lists.mysql.com
   Subject: How do I add a column only if it doesn't exist?
  
   I want to do something like this:
  
   if not exists `hotel_page_templates`.`hpt_custom_fields`
   alter table `hotel_page_templates` add column
   `hpt_custom_fields` text after
   `hpt_alternate_username`;
  
  
   ÐÆ5ÏÐ
  
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:
   http://lists.mysql.com/[EMAIL PROTECTED]
  
  
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]