Re: SELECT Help

2011-02-04 Thread Tompkins Neil
Hi Travis,

That query kind of gives me the desired result. However, it is showing
me the 1, 18, 11, 23, 3, 2010-11-14 17:18:17 record and not 2, 11, 10, 3, 6,
2010-12-20 22:17:13, which is when they changed teams. Any thoughts?

Cheers
Neil

On Thu, Feb 3, 2011 at 10:32 PM, Travis Ard travis_...@hotmail.com wrote:

 Something like this might help you find all of the times where your user_id
 switched to a different team_id:

 select team_id, user_id, min(last_changed)
 from
 (select home_team_id as team_id, home_user_id as user_id, last_changed
 from data
 union all
 select away_team_id as team_id, away_user_id as user_id, last_changed
 from data) s1
 where s1.user_id = 3
 group by team_id, user_id;

 -Travis

 -Original Message-
 From: Tompkins Neil [mailto:neil.tompk...@googlemail.com]
 Sent: Thursday, February 03, 2011 6:34 AM
 To: [MySQL]
 Subject: SELECT Help

 Hi,

 I have the following list of sample data, and need a SELECT statement to help
 me identify the point I've highlighted in the data below:

 Season, Competition, home_team_id, away_team_id, home_user_id, away_user_id, last_changed
 1, 18, 11, 23, 3, 2010-11-14 17:18:17
 1, 11, 8, 3, 82, 2010-11-14 18:37:44
 1, 20, 11, 69, 3, 2010-11-17 23:07:49
 1, 1, 11, 4, 3, 2010-11-18 19:00:26
 1, 11, 1, 3, 4, 2010-11-18 19:00:42
 1, 12, 11, 5, 3, 2010-11-19 22:49:49
 1, 11, 14, 3, 19, 2010-11-23 21:38:19
 1, 3, 11, 15, 3, 2010-11-25 22:08:23
 1, 7, 11, 66, 3, 2010-11-28 02:38:15
 2, 73, 60, 137, 3, 2010-12-08 00:22:30
 2, 60, 73, 3, 137, 2010-12-08 00:22:35
 2, 60, 37, 3, 112, 2010-12-09 20:05:44
 2, 60, 65, 3, 158, 2010-12-12 21:45:14
 2, 72, 60, 141, 3, 2010-12-13 15:38:25
 2, 60, 68, 3, 87, 2010-12-13 16:08:08
 2, 60, 45, 3, 8, 2010-12-13 22:34:40
 2, 66, 60, 140, 3, 2010-12-14 22:10:42
 2, 60, 71, 3, 142, 2010-12-16 19:48:46
 2, 60, 64, 3, 30, 2010-12-19 16:41:21
 2, 76, 60, 17, 3, 2010-12-19 19:17:04
 2, 60, 76, 3, 17, 2010-12-20 00:40:56
 *2, 11, 10, 3, 6, 2010-12-20 22:17:13*
 2, 13, 11, 104, 3, 2010-12-21 00:36:37
 2, 6, 11, 168, 3, 2010-12-29 20:20:52
 2, 11, 18, 3, 97, 2010-12-29 20:41:07
 2, 20, 11, 5, 3, 2010-12-30 21:24:58
 2, 15, 11, 163, 3, 2010-12-30 21:46:39
 2, 13, 11, 12, 3, 2010-12-30 22:33:15

 Basically, I need to find the point at which the user (in this instance
 user 3, appearing as either home_user_id or away_user_id) changed teams
 (home_team_id or away_team_id) - if you understand what I mean.  Any
 ideas on how I can achieve this using MySQL?

 Cheers
 Neil
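
For what it's worth, one way to surface every change point rather than only the
first team per user is to order user 3's rows chronologically and flag each row
whose team differs from the previous one. The sketch below assumes the same
`data` table and union as Travis's query and relies on 5.x-style user
variables, whose evaluation order inside a SELECT is not strictly guaranteed,
so treat it as a starting point rather than a definitive answer:

SELECT team_id, user_id, last_changed
FROM (
  SELECT s1.team_id, s1.user_id, s1.last_changed,
         -- first row compares against NULL and is skipped (starting team, not a change)
         IF(s1.team_id <> @prev_team, 1, 0) AS team_changed,
         @prev_team := s1.team_id AS prev_team_update
  FROM (
    SELECT home_team_id AS team_id, home_user_id AS user_id, last_changed
    FROM data WHERE home_user_id = 3
    UNION ALL
    SELECT away_team_id, away_user_id, last_changed
    FROM data WHERE away_user_id = 3
    ORDER BY last_changed   -- if the ordering is ignored here, add a large LIMIT after it
  ) s1
  CROSS JOIN (SELECT @prev_team := NULL AS init_val) init   -- initialise the variable
) s2
WHERE team_changed = 1;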




Re: Unknown column 'users.users_id' in 'where clause'

2011-02-04 Thread Tompkins Neil
Thanks, but how can I pass the current user's value from the outer query?

On Thu, Feb 3, 2011 at 4:22 PM, Simcha Younger sim...@syounger.com wrote:

 On Thu, 3 Feb 2011 13:55:36 +
 Tompkins Neil neil.tompk...@googlemail.com wrote:

  SELECT DISTINCT(away_teams_id) AS teams_id
  FROM fixtures_results
  WHERE (fixtures_results.away_users_id = *users.users_id*)

  Any ideas why I'm getting Unknown column 'users.users_id' in 'where clause'
  for the part of the statement that I've marked as *bold*, and how I can
  overcome this problem?

 You have to add the users table to the subquery.
 Your subquery only has the fixtures_results table in the 'from' clause, and
 the subquery is not aware of the tables you use in the outer query.
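
 A minimal sketch of that suggestion, with the users table joined inside the
 subquery (table and column names are taken from the thread; the literal id is
 only a placeholder for however the current user is identified):

 SELECT DISTINCT fr.away_teams_id AS teams_id
 FROM fixtures_results fr
 JOIN users u ON u.users_id = fr.away_users_id
 WHERE u.users_id = 42;  -- placeholder for the current user's id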

 --
 Simcha Younger sim...@syounger.com





RE: How do I increase memory allocated to MySQL?

2011-02-04 Thread Jerry Schwartz
-Original Message-
From: Yannis Haralambous [mailto:yannis.haralamb...@telecom-bretagne.eu]
Sent: Thursday, February 03, 2011 10:18 PM
To: David Brian Chait
Cc: mysql@lists.mysql.com; y...@mpi-inf.mpg.de
Subject: Re: How do I increase memory allocated to MySQL?

[JS] snip



the query was just

SELECT * FROM wasfoundin WHERE yakoright LIKE '%geography%'

[JS] I think it is the leading % that is causing the problem.

I don't have a ready solution. Perhaps a full-text index would help, but I'm 
not sure.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



On 4 Feb 2011, at 11:14, David Brian Chait wrote:

 Yannis,
  How is the data structured? Can you give us an example of the queries
that you are trying to run? Do you have indexes in place? A very inefficient
query, or poorly structured database can lead to this type of timeout issue 
on
the type of low end hardware that you are using.

 Thanks,
 David

 -Original Message-
 From: Yannis Haralambous [mailto:yannis.haralamb...@telecom-bretagne.eu]
 Sent: Thursday, February 03, 2011 5:43 PM
 To: mysql@lists.mysql.com
 Cc: y...@mpi-inf.mpg.de
 Subject: How do I increase memory allocated to MySQL?

 Hi everybody,

 I have loaded a very large amount of data into my MySQL database (coming from
 the YAGO project):

 -rw-rw  1 yannis  admin   65  3 fév 16:07 db.opt
 -rw-rw  1 yannis  admin   6392030392  3 fév 21:35 wasfoundin.MYD
 -rw-rw  1 yannis  admin  11085793280  4 fév 04:54 wasfoundin.MYI
 -rw-rw  1 yannis  admin 8668  3 fév 16:09 wasfoundin.frm

 As you can see, the MYI file is 11 GB and the MYD file 6.4 GB.

 Whenever I try to send a query through phpmyadmin, it crashes and I get
returned to the initial page of phpmyadmin.
 When I send an SQL query directly, it works, but it takes many minutes (a
simple SELECT will take about 10 minutes).

 I'm using the latest version of MAMP under MacOS X (on a MacBook Pro 2.6 
 GHz
Intel Core 2 with 4Gb of RAM).

 What can I do to make phpMyAdmin work? Is there a chance to gain more speed
 when interacting through regular SQL queries, or is it hopeless?
 Where can I find more information about using MySQL with such big tables?

 Thanks in advance

 --
 ---
 Yannis Haralambous
 Directeur d'études
 ADRESSE TEMPORAIRE :
 University of Aizu
 Aizu-Wakamatsu, Fukushima-ken  965-8580, Japon ADRESSE PERMANENTE :
 Institut Télécom, Télécom Bretagne
 Département Informatique
 UMR CNRS 3192 Lab-STICC
 Technopôle Brest Iroise
 CS 83818, 29238 Brest Cedex 3, France
 Tel: +33 2 29 00 14 27
 Fax: +33 2 29 00 12 82
 Email: yannis.haralamb...@telecom-bretagne.eu
 Internet: http://omega.enstb.org/yannis
 ICBM address: 48°21'31.57N 4°34'16.76W
 Twitter: y_haralambous
 ---
 ...pour distinguer l'extérieur d'un aquarium, mieux vaut n'être pas poisson

 ...the ball I threw while playing in the park has not yet reached the 
 ground

 Es gab eine Zeit, wo ich nur ungern über Schubert sprechen, nur Nächtens 
 den
Bäumen und Sternen von ihm vorerzählen mögen.


--
---
Yannis Haralambous
Directeur d'études
ADRESSE TEMPORAIRE :
University of Aizu
Aizu-Wakamatsu, Fukushima-ken  965-8580, Japon
ADRESSE PERMANENTE :
Institut Télécom, Télécom Bretagne
Département Informatique
UMR CNRS 3192 Lab-STICC
Technopôle Brest Iroise
CS 83818, 29238 Brest Cedex 3, France
Tel: +33 2 29 00 14 27
Fax: +33 2 29 00 12 82
Email: yannis.haralamb...@telecom-bretagne.eu
Internet: http://omega.enstb.org/yannis
ICBM address: 48°21'31.57N 4°34'16.76W
Twitter: y_haralambous
---
...pour distinguer l'extérieur d'un aquarium,
mieux vaut n'être pas poisson

...the ball I threw while playing in the park
has not yet reached the ground

Es gab eine Zeit, wo ich nur ungern über Schubert sprechen,
nur Nächtens den Bäumen und Sternen von ihm vorerzählen mögen.








Table/select problem...

2011-02-04 Thread Andy Wallace

Greetings, all...

I'm having an issue with a SELECT in our system. We have an event log table,
with about 9 million rows in it. Inserts happen with some pretty high frequency,
and these selects happen periodically.  The event_log table is MyISAM, the
rest of the tables are InnoDB.

What's happening is that, periodically, when this select gets run, the whole
damn thing locks up, and that pretty much shuts us down (since many things
insert events into the table, and the table gets locked, so all the inserts
hang).

The statement and the EXPLAIN for it are below. The enduser table has about a
million rows in it, the event_type table 35 rows. The weird part is that, if
I strip the query down to use no joins, the EXPLAIN estimates about 17,000
rows, but the query itself still does the table-locking thing.

Should we perhaps change the event log to InnoDB to avoid table locking? Might
the table itself be corrupt in some way? Any thoughts?

thanks,
andy




EXPLAIN
SELECT EL.event_log_id,
  EL.event_time,
  DATE_FORMAT(EL.event_time, '%c-%d-%Y %H:%i:%s') as 'time_formatted',
  ET.event_type_id,
  ET.description,
  EL.csr_name,
  EL.enduser_acnt,
  EL.csr_name,
  EL.referer,
  EL.mls_id,
  EL.mls_no,
  EL.ss_id,
  EL.details,
  E.fname,
  E.lname,
  E.email,
  E.phone1
FROM event_log EL
JOIN event_type ET ON EL.event_type_id = ET.event_type_id
JOIN enduser E ON EL.enduser_acnt = E.enduser_acnt
WHERE EL.acnt = 'AR238156'
  AND EL.enduser_acnt != ''
  AND EL.event_type_id = 'EndUserLogin'
  AND event_time BETWEEN DATE_SUB(CURDATE(), INTERVAL '7' DAY) AND NOW()
ORDER BY EL.event_time DESC



*** 1. row ***
   id: 1
  select_type: SIMPLE
table: ET
 type: const
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 92
  ref: const
 rows: 1
Extra: Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: EL
 type: index_merge
possible_keys: agent,enduser,event_log_ibfk_1
  key: agent,event_log_ibfk_1
  key_len: 62,92
  ref: NULL
 rows: 1757
Extra: Using intersect(agent,event_log_ibfk_1); Using where
*** 3. row ***
   id: 1
  select_type: SIMPLE
table: E
 type: eq_ref
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 4
  ref: idx_acnt.EL.enduser_acnt
 rows: 1
Extra: Using where
3 rows in set (0.00 sec)


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.




writing to disk at a configurable time

2011-02-04 Thread Vinubalaji Gopal
Hi all,
  I wanted to know if Mysql allows me to configure it such that the
writes to disk happen at a configurable time or after the buffers have
enough data. Say there are 10 clients connecting to mysql and each one
is inserting a row - I want to only write to disk when the buffer has
enough data (say all the 10 rows are written to the main memory) or at
some configurable interval. I know there could be  a data loss in this
case (if a crash happens before the buffer is written to disk) but I
am willing to take that risk in the application.

-- 
Vinu




Re: Table/select problem...

2011-02-04 Thread Steve Musumeche
I had this same issue a while back and solved it by writing my events to 
a disk-based file and periodically importing them into the event log 
MyISAM table.  This way, even if your select statements lock the table, 
it won't affect the performance of your application.  Of course, this 
may require some rewriting of your application code, depending on how 
events are logged.
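
For reference, the periodic import step might look something like the sketch
below. The file path, delimiters, and column list are illustrative guesses, not
the actual setup described above:

LOAD DATA INFILE '/var/spool/app/event_buffer.csv'
INTO TABLE event_log
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(acnt, enduser_acnt, event_type_id, event_time, details);  -- columns borrowed from the query in this thread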


You could avoid the locking with InnoDB, but I did not choose that 
solution because MyISAM seems like a better fit for a logging situation, 
and they can later be used in Merge tables.  I wonder if any others have 
used InnoDB for large logging tables and what the performance has been?


Steve Musumeche
CIO, Internet Retail Connection
st...@internetretailconnection.com
1-800-248-1987 ext 802


On 2/4/2011 11:29 AM, Andy Wallace wrote:

Greetings, all...

I'm having an issue with a SELECT in our system. We have an event log 
table,
with about 9 million rows in it. Inserts happen with some pretty high 
frequency,
and these selects happen periodically.  The event_log table is MyISAM, 
the

rest of the tables are InnoDB.

What's happening is that, periodically, when this select gets run, the 
whole
damn thing locks up, and that pretty much shuts us down (since many 
things
insert events into the table, and the table gets locked, so all the 
inserts

hang).

The statement and the explain for it are below. the enduser table has 
about a
million rows in it, the event_type table 35 rows.  The weird part is 
that, if
I strip down the query to use no joins, the explain wants to return 
about 17,000

rows, but the query itself does the table locking thing.

Should we perhaps change the event log to InnoDB to avoid table 
locking? Might

the table itself be corrupt in some way? Any thoughts?

thanks,
andy




EXPLAIN
SELECT EL.event_log_id,
  EL.event_time,
  DATE_FORMAT(EL.event_time, '%c-%d-%Y %H:%i:%s') as 
'time_formatted',

  ET.event_type_id,
  ET.description,
  EL.csr_name,
  EL.enduser_acnt,
  EL.csr_name,
  EL.referer,
  EL.mls_id,
  EL.mls_no,
  EL.ss_id,
  EL.details,
  E.fname,
  E.lname,
  E.email,
  E.phone1
FROM event_log EL
JOIN event_type ET ON EL.event_type_id = ET.event_type_id
JOIN enduser E ON EL.enduser_acnt = E.enduser_acnt
WHERE EL.acnt = 'AR238156'
  AND EL.enduser_acnt != ''
  AND EL.event_type_id = 'EndUserLogin'
  AND event_time BETWEEN DATE_SUB(CURDATE(), INTERVAL '7' DAY) AND NOW()
ORDER BY EL.event_time DESC



*** 1. row ***
   id: 1
  select_type: SIMPLE
table: ET
 type: const
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 92
  ref: const
 rows: 1
Extra: Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: EL
 type: index_merge
possible_keys: agent,enduser,event_log_ibfk_1
  key: agent,event_log_ibfk_1
  key_len: 62,92
  ref: NULL
 rows: 1757
Extra: Using intersect(agent,event_log_ibfk_1); Using where
*** 3. row ***
   id: 1
  select_type: SIMPLE
table: E
 type: eq_ref
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 4
  ref: idx_acnt.EL.enduser_acnt
 rows: 1
Extra: Using where
3 rows in set (0.00 sec)







Re: writing to disk at a configurable time

2011-02-04 Thread Johan De Meersman
InnoDB definitely has some parameters you can play with, but I've never
actually done so myself.


On Fri, Feb 4, 2011 at 8:09 PM, Vinubalaji Gopal vinubal...@gmail.com wrote:

 Hi all,
  I wanted to know if Mysql allows me to configure it such that the
 writes to disk happen at a configurable time or after the buffers have
 enough data. Say there are 10 clients connecting to mysql and each one
 is inserting a row - I want to only write to disk when the buffer has
 enough data (say all the 10 rows are written to the main memory) or at
 some configurable interval. I know there could be  a data loss in this
 case (if a crash happens before the buffer is written to disk) but I
 am willing to take that risk in the application.

 --
 Vinu





-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: Table/select problem...

2011-02-04 Thread Johan De Meersman
Do you delete data from the table ?

MyISAM will only grant a write lock when there are no locks on the table -
including implicit read locks. That may be your problem.

There is a single situation when concurrent reads and writes are possible on
MyISAM, however: when your table has no holes in the data. At that point,
selects happen on the existing data, while the insert queue is handled
(sequentially) at the same time.

If that is indeed your problem, you may fix the table using OPTIMIZE
TABLE.

Two other options might be:

   - set the variable concurrent_insert to 2 - this will allow concurrent
   inserts at the end of the table even when there are holes. Downside is that
   freed space (from deletes) is not reused.
   - use INSERT DELAYED. Code returns immediately, but you have no way of
    knowing whether or not any given insert succeeded.


If you delete data, but only relatively old data, you might also benefit
from partitioning the table: I'm not sure about this, but it seems
reasonable that concurrent updates would be possible on partitions with no
holes. Should try this sometime.
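
To make the options above concrete, here is a rough sketch. concurrent_insert,
OPTIMIZE TABLE and INSERT DELAYED are all standard MySQL 5.x features, but the
values and the column list here are only examples:

-- allow concurrent inserts at the end of the data file even when there are holes
SET GLOBAL concurrent_insert = 2;

-- defragment the MyISAM table so normal concurrent inserts work again
-- (this takes a table lock for the duration, so run it in a quiet window)
OPTIMIZE TABLE event_log;

-- fire-and-forget insert: returns immediately, with no success guarantee
INSERT DELAYED INTO event_log (acnt, event_type_id, event_time)
VALUES ('AR238156', 'EndUserLogin', NOW());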


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: writing to disk at a configurable time

2011-02-04 Thread Rolando Edwards
innodb_flush_log_at_trx_commit is the parameter to tweak.

The following is an excerpt from MySQL 5.0 Certification Study Guide (ISBN 
0-672-32812-7)
http://www.amazon.com/MySQL-5-0-Certification-Study-Guide/dp/0672328127/ref=sr_1_1?ie=UTF8qid=1296851778sr=8-1
Chapter 29, Pages 429,430

InnoDB tries to flush the log approximately once a second in any case, but the
innodb_flush_log_at_trx_commit option can be set to determine how much writing
and flushing occurs in addition. The setting of this option is directly related
to ACID durability and to performance, as follows:

If you set innodb_flush_log_at_trx_commit to 1, changes are written from the
log buffer and the log file is flushed to disk for each commit. This guarantees
that the changes will not be lost even in the event of a crash. This is the
safest setting, and is also the required setting if you need ACID durability.
However, this setting also produces the slowest performance.

A setting of 0 causes the log file to be written and flushed to disk
approximately once a second, but not after each commit. On a busy system, this
can reduce log-related disk activity significantly, but in the event of a crash
it can result in the loss of about a second's worth of committed changes.

A setting of 2 causes the log buffer to be written to the log file after each
commit, but file writes are flushed to disk approximately once a second. This
is somewhat slower than a setting of 0. However, the committed changes will not
be lost if it is only the MySQL server that crashes and not the operating
system or server host: the machine continues to run, so the changes written to
the log file are in the filesystem cache and will eventually be flushed
normally.

The tradeoff controlled by the innodb_flush_log_at_trx_commit setting is
therefore between durability and performance. If ACID durability is required, a
setting of 1 is necessary. If a slight risk to durability is acceptable in
exchange for better performance, a value of 0 or 2 may be used.
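
In practice the parameter can be changed at runtime or in the configuration
file. A quick sketch - the value 2 here is only an example of trading a little
durability for speed, as described above:

-- at runtime (it is a dynamic global variable)
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SET GLOBAL innodb_flush_log_at_trx_commit = 2;

-- or persistently in my.cnf, under [mysqld]:
--   innodb_flush_log_at_trx_commit = 2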

Rolando A. Edwards
MySQL DBA (SCMDBA)

155 Avenue of the Americas, Fifth Floor
New York, NY 10013
212-625-5307 (Work)
201-660-3221 (Cell)
AIM  Skype : RolandoLogicWorx
redwa...@logicworks.net
http://www.linkedin.com/in/rolandoedwards


-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Friday, February 04, 2011 2:57 PM
To: Vinubalaji Gopal
Cc: mysql@lists.mysql.com
Subject: Re: writing to disk at a configurable time

InnoDB definitely has some parameters you can play with, but I've never
actually done so myself.


On Fri, Feb 4, 2011 at 8:09 PM, Vinubalaji Gopal vinubal...@gmail.com wrote:

 Hi all,
  I wanted to know if Mysql allows me to configure it such that the
 writes to disk happen at a configurable time or after the buffers have
 enough data. Say there are 10 clients connecting to mysql and each one
 is inserting a row - I want to only write to disk when the buffer has
 enough data (say all the 10 rows are written to the main memory) or at
 some configurable interval. I know there could be  a data loss in this
 case (if a crash happens before the buffer is written to disk) but I
 am willing to take that risk in the application.

 --
 Vinu





-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel




Question about database value checking

2011-02-04 Thread Andy Wallace

So, a problem popped up today that has caused us no end of hair-pulling, and
it brought to mind a similar issue that I found very, well, wrong.

If you have a table defined:

CREATE TABLE `tester_table` (
  `acnt`varchar(20) NOT NULL DEFAULT '',
  `method`  varchar(10) NOT NULL DEFAULT '',
  `card_num`varchar(100) NOT NULL DEFAULT '',
  PRIMARY KEY (`acnt`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

And try this:

INSERT INTO tester_table (acnt, method, card_num)
VALUES ('test1', 'push', NULL);

That fails and gives a nice error. But:

INSERT INTO tester_table (acnt, method, card_num)
VALUES ('test1', 'push', 'A12345');

UPDATE tester_table set card_num = NULL WHERE acnt = 'test1';

That succeeds, but it puts an empty string into the card_num column. I
would have thought (hoped) that an error would be thrown in that case as
well.  On a similar note, the following table:

CREATE TABLE `tester_table2` (
  `acnt` varchar(20) NOT NULL,
  `required` enum('NO','DETAILS','RESULTS') NOT NULL,
  PRIMARY KEY (`acnt`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Lets you insert:

INSERT INTO tester_table2 (acnt, required) VALUES ('A123456', 'THIS IS NOT REAL');

Though it just puts an empty string into the required column.

Is there a setting for MySQL to return errors in these cases? It seems silly to
set up an ENUM column, or a NOT NULL column, and not have the possible values
enforced?

thanks,
andy


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.




RE: Question about database value checking

2011-02-04 Thread Peter He

Are you using the strict SQL mode?  Check your my.cnf file.
 
Peter
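
A quick sketch of what that looks like - checking the current mode and turning
on strict checking. The exact mode string is a judgment call; STRICT_ALL_TABLES
is shown here because the table in question is MyISAM:

-- check the current setting
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;

-- enable strict checking for new connections (or set sql_mode in my.cnf)
SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';

With strict mode enabled, the UPDATE ... SET card_num = NULL and the invalid
ENUM insert above should raise errors instead of being silently coerced to ''.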
 
 Date: Fri, 4 Feb 2011 14:08:01 -0800
 From: awall...@ihouseweb.com
 To: mysql@lists.mysql.com
 Subject: Question about database value checking
 
 So, a problem popped up today that has caused us no end of hair-pulling, and
 it brought to mind a similar issue that I found very, well, wrong.
 
 If you have a table defined:
 
 CREATE TABLE `tester_table` (
 `acnt` varchar(20) NOT NULL DEFAULT '',
 `method` varchar(10) NOT NULL DEFAULT '',
 `card_num` varchar(100) NOT NULL DEFAULT '',
 PRIMARY KEY (`acnt`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1
 
 And try this:
 
 INSERT INTO tester_table (acnt, method, card_num)
 VALUES ('test1', 'push', NULL);
 
 That fails. and gives a nice error. But:
 
 INSERT INTO tester_table (acnt, method, card_num)
 VALUES ('test1', 'push', 'A12345');
 
 UPDATE tester_table set card_num = NULL WHERE acnt = 'test1';
 
 That succeeds, but it puts an empty string into the card_num column. I
 would have thought (hoped) that an error would be thrown in that case as
 well. On a similar note, the following table:
 
 CREATE TABLE `tester_table2` (
 `acnt` varchar(20) NOT NULL,
 `required` enum('NO','DETAILS','RESULTS') NOT NULL,
 PRIMARY KEY (`acnt`)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8
 
 Lets you insert:
 
 INSERT INTO tester_table2 (acnt, required) VALUES ('A123456', 'THIS IS NOT 
 REAL');
 
 Though it just puts an empty string into the required column.
 
 Is there a setting for mysql to return errors in these cases? It seems silly 
 to set
 up an enum column, or a not null column, and not have the possible values 
 enforced?
 
 thanks,
 andy
 
 
 -- 
 Andy Wallace
 iHOUSEweb, Inc.
 awall...@ihouseweb.com
 (866) 645-7700 ext 219
 --
 There are two ways to build software:
 Make it so simple that there are obviously no bugs,
 or make it so complex that there are no obvious bugs.
 
 
  

Re: How do I increase memory allocated to MySQL?

2011-02-04 Thread Kevin Spencer
2011/2/3 Yannis Haralambous yannis.haralamb...@telecom-bretagne.eu:

 what am I doing wrong?

 the query was just

 SELECT * FROM wasfoundin WHERE yakoright LIKE '%geography%'

When you use a leading wildcard symbol, MySQL will do a full table
scan regardless of any indexes you've created.  If you've got a MyISAM
table, I recommend a FULLTEXT index.

http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html
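
A rough sketch of that approach on the table from this thread (the index name
is invented, and note that FULLTEXT matches whole words, so the results can
differ from LIKE '%geography%'):

ALTER TABLE wasfoundin ADD FULLTEXT INDEX ft_yakoright (yakoright);

SELECT * FROM wasfoundin
WHERE MATCH(yakoright) AGAINST('geography');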

--
Kevin.




Re: Question about database value checking

2011-02-04 Thread Andy Wallace

Thanks Peter, exactly what I was hoping for!
andy

On 2/4/11 3:11 PM, Peter He wrote:


Are you using the strict SQL mode?  Check your my.cnf file.

Peter


Date: Fri, 4 Feb 2011 14:08:01 -0800
From: awall...@ihouseweb.com
To: mysql@lists.mysql.com
Subject: Question about database value checking

So, a problem popped up today that has caused us no end of hair-pulling, and
it brought to mind a similar issue that I found very, well, wrong.

If you have a table defined:

CREATE TABLE `tester_table` (
`acnt` varchar(20) NOT NULL DEFAULT '',
`method` varchar(10) NOT NULL DEFAULT '',
`card_num` varchar(100) NOT NULL DEFAULT '',
PRIMARY KEY (`acnt`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

And try this:

INSERT INTO tester_table (acnt, method, card_num)
VALUES ('test1', 'push', NULL);

That fails. and gives a nice error. But:

INSERT INTO tester_table (acnt, method, card_num)
VALUES ('test1', 'push', 'A12345');

UPDATE tester_table set card_num = NULL WHERE acnt = 'test1';

That succeeds, but it puts an empty string into the card_num column. I
would have thought (hoped) that an error would be thrown in that case as
well. On a similar note, the following table:

CREATE TABLE `tester_table2` (
`acnt` varchar(20) NOT NULL,
`required` enum('NO','DETAILS','RESULTS') NOT NULL,
PRIMARY KEY (`acnt`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Lets you insert:

INSERT INTO tester_table2 (acnt, required) VALUES ('A123456', 'THIS IS NOT 
REAL');

Though it just puts an empty string into the required column.

Is there a setting for mysql to return errors in these cases? It seems silly to 
set
up an enum column, or a not null column, and not have the possible values 
enforced?

thanks,
andy


--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.






--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.




RE: Table/select problem...

2011-02-04 Thread Travis Ard
What columns do you have indexed on your event_log table?  Can you post the
output from SHOW CREATE TABLE? How long does the query run for?
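
For reference, the indexing details asked for above can be pulled with:

SHOW CREATE TABLE event_log\G
SHOW INDEX FROM event_log;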

-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com] 
Sent: Friday, February 04, 2011 10:29 AM
To: mysql list
Subject: Table/select problem...

Greetings, all...

I'm having an issue with a SELECT in our system. We have an event log table,
with about 9 million rows in it. Inserts happen with some pretty high
frequency,
and these selects happen periodically.  The event_log table is MyISAM, the
rest of the tables are InnoDB.

What's happening is that, periodically, when this select gets run, the whole
damn thing locks up, and that pretty much shuts us down (since many things
insert events into the table, and the table gets locked, so all the inserts
hang).

The statement and the explain for it are below. the enduser table has about
a
million rows in it, the event_type table 35 rows.  The weird part is that,
if
I strip down the query to use no joins, the explain wants to return about
17,000
rows, but the query itself does the table locking thing.

Should we perhaps change the event log to InnoDB to avoid table locking?
Might
the table itself be corrupt in some way? Any thoughts?

thanks,
andy




EXPLAIN
SELECT EL.event_log_id,
   EL.event_time,
   DATE_FORMAT(EL.event_time, '%c-%d-%Y %H:%i:%s') as 'time_formatted',
   ET.event_type_id,
   ET.description,
   EL.csr_name,
   EL.enduser_acnt,
   EL.csr_name,
   EL.referer,
   EL.mls_id,
   EL.mls_no,
   EL.ss_id,
   EL.details,
   E.fname,
   E.lname,
   E.email,
   E.phone1
FROM event_log EL
JOIN event_type ET ON EL.event_type_id = ET.event_type_id
JOIN enduser E ON EL.enduser_acnt = E.enduser_acnt
WHERE EL.acnt = 'AR238156'
   AND EL.enduser_acnt != ''
   AND EL.event_type_id = 'EndUserLogin'
   AND event_time BETWEEN DATE_SUB(CURDATE(), INTERVAL '7' DAY) AND NOW()
ORDER BY EL.event_time DESC



*** 1. row ***
id: 1
   select_type: SIMPLE
 table: ET
  type: const
possible_keys: PRIMARY
   key: PRIMARY
   key_len: 92
   ref: const
  rows: 1
 Extra: Using filesort
*** 2. row ***
id: 1
   select_type: SIMPLE
 table: EL
  type: index_merge
possible_keys: agent,enduser,event_log_ibfk_1
   key: agent,event_log_ibfk_1
   key_len: 62,92
   ref: NULL
  rows: 1757
 Extra: Using intersect(agent,event_log_ibfk_1); Using where
*** 3. row ***
id: 1
   select_type: SIMPLE
 table: E
  type: eq_ref
possible_keys: PRIMARY
   key: PRIMARY
   key_len: 4
   ref: idx_acnt.EL.enduser_acnt
  rows: 1
 Extra: Using where
3 rows in set (0.00 sec)


-- 
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.







Re: writing to disk at a configurable time

2011-02-04 Thread Vinubalaji Gopal
Thank you. I will try the different options and see how it performs.

On Fri, Feb 4, 2011 at 12:38 PM, Rolando Edwards
redwa...@logicworks.net wrote:
 innodb_flush_log_at_trx_commit is the parameter to tweak.

 The following is an excerpt from MySQL 5.0 Certification Study Guide (ISBN 
 0-672-32812-7)
 http://www.amazon.com/MySQL-5-0-Certification-Study-Guide/dp/0672328127/ref=sr_1_1?ie=UTF8qid=1296851778sr=8-1
 Chapter 29, Pages 429,430

 InnoDB tries to flush the log approximately once a second in any case, but
 the innodb_flush_log_at_trx_commit option can be set to determine how much
 writing and flushing occurs in addition. The setting of this option is
 directly related to ACID durability and to performance, as follows:

 If you set innodb_flush_log_at_trx_commit to 1, changes are written from the
 log buffer and the log file is flushed to disk for each commit. This
 guarantees that the changes will not be lost even in the event of a crash.
 This is the safest setting, and is also the required setting if you need ACID
 durability. However, this setting also produces the slowest performance.

 A setting of 0 causes the log file to be written and flushed to disk
 approximately once a second, but not after each commit. On a busy system,
 this can reduce log-related disk activity significantly, but in the event of
 a crash it can result in the loss of about a second's worth of committed changes.

 A setting of 2 causes the log buffer to be written to the log file after each
 commit, but file writes are flushed to disk approximately once a second. This
 is somewhat slower than a setting of 0. However, the committed changes will
 not be lost if it is only the MySQL server that crashes and not the operating
 system or server host: the machine continues to run, so the changes written
 to the log file are in the filesystem cache and will eventually be flushed
 normally.

 The tradeoff controlled by the innodb_flush_log_at_trx_commit setting is
 therefore between durability and performance. If ACID durability is
 required, a setting of 1 is necessary. If a slight risk to durability is
 acceptable in exchange for better performance, a value of 0 or 2 may be used.

 Rolando A. Edwards
 MySQL DBA (SCMDBA)

 155 Avenue of the Americas, Fifth Floor
 New York, NY 10013
 212-625-5307 (Work)
 201-660-3221 (Cell)
 AIM  Skype : RolandoLogicWorx
 redwa...@logicworks.net
 http://www.linkedin.com/in/rolandoedwards


 -Original Message-
 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
 Meersman
 Sent: Friday, February 04, 2011 2:57 PM
 To: Vinubalaji Gopal
 Cc: mysql@lists.mysql.com
 Subject: Re: writing to disk at a configurable time

 InnoDB definitely has some parameters you can play with, but I've never
 actually done so myself.


 On Fri, Feb 4, 2011 at 8:09 PM, Vinubalaji Gopal vinubal...@gmail.com wrote:

 Hi all,
  I wanted to know if Mysql allows me to configure it such that the
 writes to disk happen at a configurable time or after the buffers have
 enough data. Say there are 10 clients connecting to mysql and each one
 is inserting a row - I want to only write to disk when the buffer has
 enough data (say all the 10 rows are written to the main memory) or at
 some configurable interval. I know there could be  a data loss in this
 case (if a crash happens before the buffer is written to disk) but I
 am willing to take that risk in the application.

 --
 Vinu





 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
Vinu

In a world without fences who needs Gates?




Re: How do I increase memory allocated to MySQL?

2011-02-04 Thread Feris Thia
Hi Kevin,

On Sat, Feb 5, 2011 at 6:00 AM, Kevin Spencer ke...@kevinspencer.org wrote:

 When you use a leading wildcard symbol, MySQL will do a full table
 scan regardless of any indexes you've created.


Does it also apply to regex lookups?

Regards,

Feris