RE: Connecting two queries into one

2006-02-27 Thread John McCaskey
SELECT consumers.id FROM consumers, cases
WHERE
consumers.id = cases.consumer_id
AND consumers.date_of_birth = ?
AND cases.last_name = ?
AND cases.first_name = ?
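
The same thing written with explicit JOIN syntax, in case you find it
easier to read (equivalent, using the column names described in your
message below):

SELECT consumers.id
FROM consumers
INNER JOIN cases ON cases.consumer_id = consumers.id
WHERE consumers.date_of_birth = ?
AND cases.last_name = ?
AND cases.first_name = ?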


John A. McCaskey

-Original Message-
From: Courtney Braafhart [mailto:[EMAIL PROTECTED] 
Sent: Monday, February 27, 2006 10:31 AM
To: mysql@lists.mysql.com
Subject: Connecting two queries into one

MY GOAL

To collect any consumers.id WHERE date of birth, last name and
first name match what was entered by the user.  The trick is that
date of birth lives in the consumers table and last name and first
name live in the cases table (which can be joined to consumers by
consumers.id and cases.consumer_id).

Can anyone think of a way to do this in one MySQL statement instead
of doing a loop?

I am thinking it would work something like this:

SELECT consumers.id FROM consumers WHERE date_of_birth = ?
AND consumers.id = (SELECT consumer_id FROM cases WHERE last_name = ?
AND full_first_name = ?)

Is there a way to form the above statement in MYSQL?

MY HOPE

That there is a really obvious solution to this question
and that I  
am simply suffering from a case of the Mondays!


Thanks in advance!

Courtney Braafhart





Huge number of tables with InnoDB

2006-01-13 Thread John McCaskey
Hi everyone,

 

I'm running MySQL 4.0.18 on Debian with a 2.6 linux kernel using ext3 as
the underlying filesystem for the database storage.

 

I currently have some InnoDB tables with the following structure:

 

CREATE TABLE Log_20060101 (
  Monitor_id MEDIUMINT,
  Timestamp TIMESTAMP,
  Avg FLOAT,
  PRIMARY KEY (Monitor_id, Timestamp)
) TYPE=InnoDB;

 

We partition these tables by date, as you can see, because they grow
very large and get to be slow to insert into and query over time.  We
have the idea to change the partitioning so the tables are as follows:

 

CREATE TABLE Log_[monitor_id] (
  Timestamp TIMESTAMP,
  Avg FLOAT,
  PRIMARY KEY (Timestamp)
) TYPE=InnoDB;

 

This seems to have several key advantages:

 

1)   Reduced disk space usage
2)   Easier querying of data across time (but not across individual
ids; it turns out that doesn't ever really happen in our data usage
anyway)
3)   Smaller tables, resulting in faster reads/writes, and a smaller
data volume, hopefully also resulting in faster reads/writes due to
less disk I/O being necessary

 

However, in our actual testing the 'faster writes' expectation is
getting shot down.  With 20,000 unique monitor ids and 8928 unique
timestamps, inserting into the old set of tables (20051201-20051231 in
this case) takes about 4 hours 20 minutes.  Inserting into the 20,000
new tables (Log_0 through Log_19999) takes about 10 hours.  I expected
this to be much faster, as I hoped finding the right table for an
insert would be a hash-type lookup taking constant time, while
inserting into the large tree structure in the old tables, which have
a much higher volume of rows, would be logarithmic time.  Where did I
go wrong?  My only real thought so far is the disk subsystem of the OS
being slow with large numbers of files, but I thought that wouldn't
matter for InnoDB as the data storage is all in one file.  Does anyone
know what would cause the inserts to be so much slower?

 

John A. McCaskey

Software Development Engineer

Klir Technologies, Inc.

[EMAIL PROTECTED]

206.902.2027

 



RE: Huge number of tables with InnoDB

2006-01-13 Thread John McCaskey
Alec,

Thanks for your response; however, I'm not convinced you are correct :)
Let me try to explain in more detail my reasoning below.

Why should the two level lookup be Log A + Log B?  Looking up a
table name is always an equivalence check, not a range check, so there
is no advantage to having a tree style structure for the lookup.  The
structure I would expect to be used is a hash table, which would then
make the two level lookup 1 + Log B, rather than Log(A*B) initially.

As to indexes, we ALWAYS include the monitor id in the query and query
one monitor id at a time.  We generally also include a time range.  The
reason I have structured the index monitor id first, then timestamp, on
the old tables is that getting down to just one monitor id eliminates
more rows than getting down to one timestamp.  Second, you cannot have
two separate indexes and have both used in a single query when using
InnoDB; MySQL will choose the best index and use it.  Using this index
as the primary key instead of a separate index improves disk space usage
with InnoDB, as the primary key doesn't require a separate index
structure but orders the rows correctly in the actual storage.
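
For concreteness, a typical read against this schema looks something
like the following (hypothetical values); with the composite primary
key it is satisfied by a single range scan within one monitor's
clustered rows:

SELECT Timestamp, Avg
FROM Log_20051201
WHERE Monitor_id = 12345
AND Timestamp >= '2005-12-01 00:00:00'
AND Timestamp < '2005-12-02 00:00:00';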

As to events 'close to one another': we have a set interval of 5 minutes
for the timestamps, so that's not a concern, and we do want the
monitor_id, timestamp pairs to be unique.

We insert many millions of these rows per day in our production systems,
and we also require historical storage for up to one year.  Keeping that
volume of rows in one table causes both inserts and selects to slow to a
crawl.  Unless I'm missing something here, not partitioning the data is
not a remotely feasible option.

Comments?

John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 13, 2006 9:41 AM
To: John McCaskey
Cc: MySQL
Subject: Re: Huge number of tables with InnoDB

To reply to this, I think we have to understand why you have chosen to
split the tables at all.  It seems to me that this, by introducing a
two-level lookup, is certain to be slower than any possible single table
lookup.  Generally, Log A + Log B is bound to be larger than Log(A*B).

It appears that you are querying predominantly by time.  In this case,
your index *must* start with the timestamp, not the monitor ID.  I would
suggest that you need an index on timestamp, and possibly one on monitor
ID - but not if, as you say, you never query by monitor ID at all.  Do
you need a PRIMARY KEY at all?  In what way will your system break if
there happen to be two entries with the same timestamp and monitor ID?
Presumably this will reflect two events very close together: wouldn't
you rather store that fact than lose it?

Generally, I would query your decision to have multiple tables by date
or whatever.  In my experience, whenever I have introduced such concepts
into my early designs, they have disappeared later into a better design.
It looks to me as if you are using a tool optimized to do fast searches
on large databases, then crippling its ability to optimize.

I would expect the use of thousands of tables effectively to disable
MySQL's caching capability, which is one of the biggest performance
boosters.

Alec







RE: LASSO TIPS for MYSQL: 3.4 ROLL YOUR OWN

2005-10-31 Thread John McCaskey
Yes! Please stop spamming us; we would sign up for a Lasso list, not a
MySQL list, if we cared to get these tips.

John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-Original Message-
From: Logan, David (SST - Adelaide) [mailto:[EMAIL PROTECTED] 
Sent: Monday, October 31, 2005 3:31 PM
To: m i l e s; MySQL
Subject: RE: LASSO TIPS for MYSQL: 3.4 ROLL YOUR OWN

Hi,

I'm not really interested in Lasso and can't see why I am receiving your 
emails. Could you please take the entire list off your mailing list?

Thanks and Regards

David Logan
Database Administrator
HP Managed Services
148 Frome Street,
Adelaide 5000
Australia

+61 8 8408 4273 - Work
+61 417 268 665 - Mobile
+61 8 8408 4259 - Fax


-Original Message-
From: m i l e s [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 1 November 2005 9:58 AM
To: MySQL
Subject: LASSO TIPS for MYSQL: 3.4 ROLL YOUR OWN

- Hi and Welcome to -


LASSO TIPS FOR MYSQL: 3.4


I'm your host, M i l e s.

First and foremost, a good place for you to start with Lasso is the  
following 5 things:

The FIRST LASSO TIPS FOR NEWBIES
- http://www.listsearch.com/lassotalk.lasso?id=143312
The LAST LASSO TIPS FOR NEWBIES
- http://www.listsearch.com/lassotalk.lasso?id=154859
10 LASSO RESOURCES
- http://www.listsearch.com/lassotalk.lasso?id=143018
THE LASSO RESOURCES ADDENDUM
- http://www.listsearch.com/lassotalk.lasso?id=143417
OMNIPILOT RESOURCE LIST
- http://www.omnipilot.com/Resources+for+Beginners.2225.lasso
http://www.omnipilot.com/Tip+of+the+Week.1768.lasso
http://www.omnipilot.com/Hosting+Providers.1744.lasso
http://www.omnipilot.com/Frequently+Asked+Questions.1791.lasso


TODAY'S TIP:
  ROLL YOUR OWN LINKS!


Last week I got a call from (what I thought was) a new client asking
me to take a look at a Lasso 3 site that they've had online since
before time was time, wanting an estimate of just what it would take
to upgrade this site to LP8, and to MySQL (as they were dumping FMP as
a backend, 'T SLOW!', their words not mine).  They sent me the site,
and as I went through the pages, flashes of remembrance came through,
first in bits and pieces, then as I got to one page, BAM!  It was here
that I unearthed my own comments I'd written 5 years ago!  I had
developed this site for a 3rd party but was never told who the end
client was; now I knew.  Going through the site I came across a page
that I had spent a LONG time on.  It was a way to display a series of
links dynamically outside an inline.  I remember that I couldn't do
what I really wanted to do, which was google-style links, but I did
find a 'work around' for doing what could not be done at the time with
some really fancy inlines, lists (the forerunner of today's arrays),
and a few well placed token values.  Looking at the code, and now
having LP8 in my back pocket, I could now do what I really wanted to
do (and what the client actually requested in their spec document).
And that's where today's article comes from, that and the following
quote (courtesy of OmniPilot's own Fletcher Sandbeck): "The [Link_...]
tags only work with the -Search or -FindAll actions.  The links don't
show up for -SQL actions.  The workaround is to create your own link
tags by checking the found count and calculating your desired
-SkipRecords value manually."  So today I'm going to show you a method
for rolling your own links!

 PART ONE: ROLLING ON A RIVER

The act of creating a series of dynamically generated links is not an
easy task, or at least has the reputation of not being easy.  However,
it can be done.  But before we get there, let's take a gander at the
LINK_ tag series.  This series of tags (and all the link_ series for
that matter) all require being inside an inline container in order to
produce results.  Like so:

[inline: -database='mydb', -table='mytb',
'somefield'='searchvalue', -skiprecords=(skiprecords_value), -search]

etc...some HTML...

<td>[link_firstgroup]FIRST[/link_firstgroup]</td>
<td>[link_prevgroup]PREV[/link_prevgroup]</td>
<td>[link_nextgroup]NEXT[/link_nextgroup]</td>
<td>[link_lastgroup]LAST[/link_lastgroup]</td>

[/inline]

This series of tags makes the act of creating links to the rest of
your search results really simple.  Not only that, these tags have a
series of subtags that makes them secure to use, so that you don't
display your parameters via the URL.  If you run the LINK_XXX tags
above without the subtags, you'll end up with something like this:

http://yourdomain.com/page.lasso?-Search=Action&-Table=mytb&-MaxRecords=10&-SkipRecords=10&-Database=mydb&-KeyField=id_key&-SortField=somefield&-SortOrder=ascending&anotherfield=somevalue

  ++
TO SEE THE 

RE: Replication fails with file not found error - but file is there

2005-10-31 Thread John McCaskey
Jon,
 
I can't offer any great insight into your problem I'm afraid.  But I'd 
encourage you not to add the error to the ignore list as the UPDATE won't 
actually get replicated then and your databases will be out of sync.  Maybe 
this is a filesystem problem at the OS level? What OS and filesystem is this 
being used on?
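
For what it's worth, OS error 24 is conventionally EMFILE, 'too many
open files', so it may be worth checking whether mysqld on the affected
slaves is bumping into its file descriptor limit rather than genuinely
missing the file.  A quick sanity check from the mysql prompt (variable
names as in MySQL 4.1):

SHOW VARIABLES LIKE 'open_files_limit';
SHOW VARIABLES LIKE 'table_cache';
SHOW STATUS LIKE 'Open_tables';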
 
John



From: news on behalf of Jon Drukman
Sent: Mon 10/31/2005 5:28 PM
To: mysql@lists.mysql.com
Subject: Replication fails with file not found error - but file is there



Master and slaves are both Mysql 4.1.14 standard.  There are six slaves,
and this error kept happening on #2 and #4, but then it stopped.  I
thought it was all gone for good but today it happened on #3.

The symptom is:  Replication stops with the following error:
Error 'Can't find file: './gspot/product.frm' (errno: 24)' on query.
Default database: 'gspot'. Query: 'UPDATE product SET rdate='2006-06-30'
WHERE id=928302 LIMIT 1'


The actual file and query change from occurrence to occurrence.  In all
cases, the file is actually present on the disk.  Logging in to the
slave and executing SLAVE START causes replication to resume normally.

Here's the full output from SHOW SLAVE STATUS:

Slave_IO_State = Waiting for master to send event
Master_Host = c10-gs-stage1.cnet.com
Master_User = replica
Master_Port = 3306
Connect_Retry = 60
Master_Log_File = c10-gs-stage1-bin.01
Read_Master_Log_Pos = 218146109
Relay_Log_File = c17-gs-db-slave3-relay-bin.02
Relay_Log_Pos = 217866316
Relay_Master_Log_File = c10-gs-stage1-bin.01
Slave_IO_Running = Yes
Slave_SQL_Running = No
Replicate_Do_DB = gspot,gfaqs
Replicate_Ignore_DB =
Replicate_Do_Table =
Replicate_Ignore_Table =
Replicate_Wild_Do_Table =
Replicate_Wild_Ignore_Table =
Last_Errno = 1017
Last_Error = Error 'Can't find file: './gspot/product.frm' (errno: 24)'
on query. Default database: 'gspot'. Query: 'UPDATE product SET
rdate='2006-06-30' WHERE id=928302 LIMIT 1'
Skip_Counter = 0
Exec_Master_Log_Pos = 217866265
Relay_Log_Space = 218146160
Until_Condition = None
Until_Log_File =
Until_Log_Pos = 0
Master_SSL_Allowed = No
Master_SSL_CA_File =
Master_SSL_CA_Path =
Master_SSL_Cert =
Master_SSL_Cipher =
Master_SSL_Key =
Seconds_Behind_Master =


I could just add error 1017 to the ignore list, I guess, but this gives
me the heebie jeebies.  Any ideas?

-jsd-







RE: How thread-safe is mysql_real_connect()?

2005-10-11 Thread John McCaskey
Wow, that's good to know.  Thanks Jeremiah.  It is a little strange that
the documentation doesn't mention that this behavior is different under
Windows, and leads one to believe that calling mysql_thread_init/end is
still necessary.
 
John



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tue 10/11/2005 6:52 AM
To: Jeremiah Gowdy
Cc: John McCaskey; mysql@lists.mysql.com; [EMAIL PROTECTED]
Subject: Re: How thread-safe is mysql_real_connect()?




Jeremiah Gowdy [EMAIL PROTECTED] wrote on 10/11/2005 03:08:40 AM:

 The Windows DLL is thread safe.  You do not have to call my_init() 
 and my_thread_init() because Windows DLLs receive events when they 
 are attached to a new process and when they are attached to a new 
 thread in a process.  This is one of the nicer features of Windows 
 shared libraries.  Other than that, you don't have to do anything 
 special.  I am a heavy user of libmysql under Win32.  You simply 
 mysql_init() your MYSQL struct, and then mysql_real_connect() and 
 you're ready to mysql_query().
 
 You should not call my_init() or my_thread_init() as the previous 
 poster suggested.  This could result in memory leaks.
 
 
 From libmysql/dll.c
 
 BOOL APIENTRY LibMain(HANDLE hInst,DWORD ul_reason_being_called,
 LPVOID lpReserved)
 {
   switch (ul_reason_being_called) {
 case DLL_PROCESS_ATTACH: /* case of libentry call in win 3.x */
   if (!inited++)
   {
 s_hModule=hInst;
 libmysql_init();
 main_thread=GetCurrentThreadId();
   }
 break;
 case DLL_THREAD_ATTACH:
   threads++;
   my_thread_init();
 break;
 case DLL_PROCESS_DETACH: /* case of wep call in win 3.x */
   if (!--inited) /* Safety */
   {
 /* my_thread_init() */ /* This may give extra safety */
 my_end(0);
   }
 break;
 case DLL_THREAD_DETACH:
   /* Main thread will free by my_end() */
   threads--;
   if (main_thread != GetCurrentThreadId())
   my_thread_end();
 break;
 default:
 break;
   } /* switch */
   return TRUE;
   UNREFERENCED_PARAMETER(lpReserved);
 } /* LibMain */
 
 - Original Message - 
 From: John McCaskey [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; mysql@lists.mysql.com
 Sent: Friday, October 07, 2005 10:31 AM
 Subject: RE: How thread-safe is mysql_real_connect()?
 
 
snip
 
 Shawn Green
 Database Administrator

RE: How thread-safe is mysql_real_connect()?

2005-10-07 Thread John McCaskey
Sean,

First let me thank you for all the great posts and info I've seen you
put on this list for others.

I've been working in C with MySQL in a very multithreaded environment
for several years and think I can explain the thread safety issues
clearly.  Rather than try to respond point by point to your question I'm
going to give a summary and if that doesn't help please respond again
and I'll answer specific questions.

First, mysql is in fact pretty much threadsafe when using the _r
library.  You definitely do need to use the _r library and not the
normal one: the SIGPIPE discussion applies to both, but the non-_r
library has additional safety issues surrounding mysql_real_connect()
and should not be used.  On Windows you don't really need to do
anything here, I believe, because the Windows binaries are by default
compiled to be thread-safe (from
http://dev.mysql.com/doc/mysql/en/threaded-clients.html).  To validate
this in your client code, you should, in the main() function close to
startup, use mysql_thread_safe() to verify that your linked-in version
is thread safe.

The next thing you need to do is initialize mysql globally before
creating any threads that will use it.  Simply call my_init(); in your
main thread.  After this you can go ahead and create any threads.  In
the threads you create you need to call mysql_thread_init(); and when
you end the thread, mysql_thread_end();.  In between these calls you
can just use mysql as normal, and the mysql_real_connect function will
be thread safe; you do not need to perform any locking of your own to
make only one call at a time or anything along those lines.

Here is some pseudo code of what you need to do:

int main(int argc, char **argv) {

  if (!mysql_thread_safe()) {
    fprintf(stderr, "Not Thread safe!!!");
    return 1;
  }

  my_init();

  // your regular init code

  // create the threads that will use mysql
  CreateThread();

}

void *mysql_thread(void *arg) {
  mysql_thread_init();

  // regular mysql code and whatever else here
  // use mysql_real_connect and mysql_real_query
  // and whatever without worrying about thread safety

  mysql_thread_end();
}




John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, October 07, 2005 9:01 AM
To: [EMAIL PROTECTED]; mysql@lists.mysql.com
Subject: How thread-safe is mysql_real_connect()?

(please excuse the double post but I wanted to reach the two audiences I

thought could help the best)

This is a question about interpreting the documentation in the manual
for the C API.
I searched the list archives (all lists) going back 365 days for the
terms (unquoted): mysql_real_connect thread (I also looked for
alternatives: mysql_real_connect threaded, mysql_real_connect multi
threaded, etc.).  I searched on Google Groups for: mysql_real_connect
thread and found a few interesting hits.  However, I am still not 100%
clear on how to interpret some of the information on this page:
http://dev.mysql.com/doc/mysql/en/threaded-clients.html

I do a lot of MySQL administration and development using mostly the CLI
and a few other tools, but I am writing a multithreaded client to
automate certain background processing and I need a bit of advice.
According to the page in question, the function mysql_real_connect()
is not thread-safe.

Does that simply mean that I cannot call that function from more than
one thread at a time, or does it mean that the connection created by
one call to the function will be visible to the other threads, or
what?  Just how not thread-safe is it?

Each thread will have its own MYSQL structure and I will need to use
two different connections per thread at the same time (am I going to
need a separate call to mysql_init() for each connection?).  I know how
to wrap all of my calls to mysql_real_connect() in a critical section
or protect them with a mutex if that's all I need to do.  If it's not
that simple and I do need to compile and link against another library
(as the page suggests - sort of), can someone help me configure my
Microsoft Visual C++ .NET (v7) to do it?  I say "sort of" because the
page also says that the binary distributions (which I am working with)
already contain the threadsafe library, so I wonder whether I need to
rebuild anything or not.  How can I tell?

I am an experienced but not well-seasoned C++ developer (not using C#
for this).  I know the language and can write and debug code just fine
(I can make stand-alone apps and DLLs all day); it's just that some of
the compiler/linker options and settings confound me, and I am having
trouble translating the advice on the page into specifics I can work
with for my environment.

I know I probably left out some simple pieces of information, just let
me 
know and I will respond ASAP. Please remember to CC: both lists on all 
responses.


Re: To multi thread or NOT to multi thread?

2005-09-28 Thread John McCaskey
Hi,

On 9/28/05, Lefteris Tsintjelis [EMAIL PROTECTED] wrote:

 John McCaskey wrote:
  Hello again,
 
  I modified your threading code to use a thread pool. Here are my
 results:

 Hello,

 I modified the thread pool a bit to get rid of that lock ASAP. It is safe
 to get rid of that lock right after mysql_store_results.


I wasn't quite sure about the safety of the error output that occurred
later on, but it is true that in the case of no errors occurring,
releasing sooner is safe.  Good optimization.

http://dev.mysql.com/doc/mysql/en/threaded-clients.html

 The conclusion I draw from all this is that if you have a very fast
 connection to your db its best NOT to use threading at all if your queries
 are simple and you expect the results fast.


I don't think that is necessarily true; it will also depend on the type
of queries.  But for your situation it does appear true.

Here is another interesting thing though, if multi threading is used, I
 got the best results by using 2 connections, I am also using dual CPUs
 (i%CONNECTIONS). I guess everyone can draw its own conclusions depending
 on his needs but for me, and since I only need to do *few, simple, local*
 queries, looks like I can live without the multi thread over head for
 now. The more complex are the queries, slower the connections and higher
 the response time is, (the slower the answer you get in general) then
 multi threading could be a winner.

 -- Clearly the winner ---


snip

Given your connections are local and you have 2 CPUs, I'd be interested
to see 4 connections, 4 threads, 2500 queries per thread.  Opening many
more threads than you have CPUs is always going to be a bad idea unless
there is some sort of blocking IO operation (like waiting for the
network, or a disk read) or something occurring in each thread so that
a lot of the time is spent just idle.  However, with just 2-4 threads
and connections you should be able to execute the queries in parallel,
as you actually have the hardware to do so, while avoiding a lot of the
thread overhead that you saw with 100.  You will likely still go with
single threaded for your situation, but it would be very interesting to
me if we could see the scenario I just outlined posted to compare to
your last posted results.

John


Re: To multi thread or NOT to multi thread?

2005-09-27 Thread John McCaskey
Hi,

I think I can shed a bit of light on the topic. There are several reasons
why your multithreaded code is not a good example and would be slower.

1) locking/unlocking mutexes of course does add *some* overhead
2) you have a single database connection and are passing it around between
threads thus serializing the actual queries, as such the queries are not
multithreaded at all and your code is kind of a silly use of threading --
this combined with #1 above naturally does make your threaded code slower
3) show status may not be a good example of threaded performance server
side -- a better test would be a variety of different insert queries,
or changes to different tables.  Depending on your table type, some
locking may occur on inserts that can serialize them if you are
inserting the same data or data on the same data page in the database;
more disparate queries, however, will actually execute in parallel and
should see a speed increase.

John

On 9/27/05, Lefteris Tsintjelis [EMAIL PROTECTED] wrote:

 Hi,

 What makes me wonder is that the same test, with the code
 stripped down, is to my surprise significantly faster than the
 multi threaded one, no matter how many times I run the tests.  I am
 including the code for both tests I ran.
 Since I couldn't find a good example of mutex locking, the
 following is something that worked for me.  However, I am not
 sure if it's as optimized as it should be, so I would appreciate an
 expert's opinion: is this a good example of mutex locking?  Are
 there any other better ways to do this?  Is this an OS
 or MySQL issue?  I am currently running 4.1.14 on a FreeBSD5 box.

 Timings:

 Thread Safe ON
 0.14 real 0.01 user 0.10 sys
 Thread Safe OFF
 0.08 real 0.00 user 0.06 sys

 Thnx,

 Lefteris Tsinjelis

 /*** MULTI THREADED EXAMPLE CODE ***/
 /**** -lmysqlclient_r -lpthread ****/
 #include <stdarg.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <pthread.h>
 #include <mysql.h>

 #define MAX 100

 typedef struct db_config {
     char host[16];
     char user[16];
     char pass[16];
     char name[16];
     unsigned int port;
     char *socket;
 } db_config;

 typedef struct db_mutex {
     MYSQL *db;
     pthread_mutex_t lock;
 } db_mutex;

 db_mutex dbm;

 void *db_pthread(void *arg);
 static void db_die(MYSQL *db, char *fmt, ...);
 MYSQL *db_connect(MYSQL *db, db_config *dbc);
 void db_disconnect(MYSQL *db);
 long db_query(MYSQL *db, const char *query);

 int main(int argc, char **argv) {
     int i;
     pthread_t pthread[MAX];
     db_config dbc;

     strcpy(dbc.host, "localhost");
     strcpy(dbc.user, "root");
     strcpy(dbc.pass, "");
     strcpy(dbc.name, "");
     dbc.port = 3306;
     dbc.socket = NULL;

     dbm.db = db_connect(dbm.db, &dbc);
     pthread_mutex_init(&dbm.lock, pthread_mutexattr_default);

     if (!mysql_thread_safe())
         fprintf(stderr, "Thread Safe OFF\n");
     else
         fprintf(stderr, "Thread Safe ON\n");

     pthread_setconcurrency(4);
     // fire up the threads
     for (i = 0; i < MAX; ++i)
         pthread_create(&pthread[i], NULL, db_pthread, NULL);
     // wait for threads to finish
     for (i = 0; i < MAX; ++i)
         pthread_join(pthread[i], 0);

     pthread_mutex_destroy(&dbm.lock);
     db_disconnect(dbm.db);

     exit(EXIT_SUCCESS);
 }

 void *db_pthread(void *arg) {
     db_query(dbm.db, "show status");
     pthread_exit((void *)0);
 }

 static void db_die(MYSQL *db, char *fmt, ...) {
     va_list ap;
     va_start(ap, fmt);
     vfprintf(stderr, fmt, ap);
     va_end(ap);
     (void)putc('\n', stderr);
     db_disconnect(db);
     exit(EXIT_FAILURE);
 }

 MYSQL *db_connect(MYSQL *db, db_config *dbc) {
     if ( !(db = mysql_init(db)) )
         db_die(db, "mysql_init failed: %s", mysql_error(db));
     else {
         if ( !mysql_real_connect(db, dbc->host, dbc->user, dbc->pass,
                                  dbc->name, dbc->port, dbc->socket, 0) )
             db_die(db, "mysql_real_connect failed: %s", mysql_error(db));
     }
     return (db);
 }

 void db_disconnect(MYSQL *db) {
     if (db)
         mysql_close(db);
 }

 long db_query(MYSQL *db, const char *query) {
     long ret;

     pthread_mutex_lock(&dbm.lock);
     ret = mysql_query(db, query);
     // if query failed, exit with db error
     if (ret != 0) {
         pthread_mutex_unlock(&dbm.lock);
         db_die(db, "mysql_query failed: %s", mysql_error(db));
     }
     // if query succeeded
     else {
         MYSQL_RES *res;

         res = mysql_store_result(db);
         pthread_mutex_unlock(&dbm.lock);
         // if there are rows
         if (res) {
             MYSQL_ROW row, end_row;
             unsigned int num_fields;

             num_fields = mysql_num_fields(res);
             while ( (row = mysql_fetch_row(res)) )
                 for (end_row = row + num_fields; row < end_row; ++row)
                     ++ret;
             mysql_free_result(res);
         }
         // if there are no rows, should there be any ?
         else {
             // if query was not a SELECT, return with affected rows
             if (mysql_field_count(db) == 0)
                 ret = mysql_affected_rows(db);
             // there should be data, exit with db error
             else
                 db_die(db, "mysql_store_result failed: %s", mysql_error(db));
         }
     }
     return (ret);
 }
 /***** NO MULTI THREADED EXAMPLE CODE *****/
 /***** -lmysqlclient *****/
 #include <stdarg.h>
 #include 

Re: To multi thread or NOT to multi thread?

2005-09-27 Thread John McCaskey
Hi again,

On 9/27/05, Lefteris Tsintjelis [EMAIL PROTECTED] wrote:

 John McCaskey wrote:
  Hi,
 
  I think I can shed a bit of light on the topic. There are several
 reasons
  why your multithreaded code is not a good example and would be slower.
 
  1) locking/unlocking mutexes of course does add *some* overhead

 *lots* would probably be a better choice here! :)


Well, it certainly depends on how much lock contention there is.  In a
well designed multi-threaded app you want to keep lock contention to a
minimum.  In your example lock contention is very heavy, so in that
case *lots* may be a better word!

 2) you have a single database connection and are passing it around between
  threads thus serializing the actual queries, as such the queries are not
  multithreaded at all and your code is kind of a silly use of threading
 --
  this combined with #1 above naturally does make your threaded code
 slower

 But this is what I had in mind though. I wanted to be that way instead of
 opening multi threaded connections but, from the looks of it, I guess you
 are right and its not really worth the trouble. I have read somewhere that
 opening a few connections can be slower but I guess that was probably
 wrong.


Ok, I can understand why you would have it in mind, but it's going to
be a bad idea.  Opening multiple connections will of course add some
overhead, but it's a different kind.  What you have to ask yourself is
whether the queries you are running in separate threads are capable of
being run in parallel server side.  If so, then opening multiple
connections will be a performance win.  If the queries you are running
will get serialized on the server anyway, then the extra overhead of
the additional connections will slow you down.  The current
implementation you have, however, will always be slower than doing it
without threading, as that's essentially what happens with your lock
contention anyway.

 3) show status may not be a good example of threaded performance server
  side -- A better test would be a variety of different insert queries or
  such, or changes to different tables. Depending on your table type some
  locking may occur on inserts that can serialize them if you are
 inserting
  the same data or data on the same data page in the database, more
 disparate
  queries however will actually execute in parallel and should see a speed
  increase.

 I have tried with other queries, some random ones as well, and the results
 where very similar.


Yep, your example would have the same results with any query due to the
above mentioned serialization and lock contention.  However, if you
fixed that and opened one connection per thread (or a pool of, say, 5
connections shared by 5-n threads), then the type of query and how it
executes server side is going to have a definite impact.

Good luck!

Thnx,

 Lefteris




Re: To multi thread or NOT to multi thread?

2005-09-27 Thread John McCaskey
Hello again,

I modified your threading code to use a thread pool. Here are my results:

pooled-threading.c:
[EMAIL PROTECTED]:~$ gcc -lmysqlclient_r -lpthread pooled-threading.c
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.068s
user 0m0.041s
sys 0m0.097s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.088s
user 0m0.036s
sys 0m0.098s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.107s
user 0m0.036s
sys 0m0.100s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.068s
user 0m0.043s
sys 0m0.102s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.067s
user 0m0.044s
sys 0m0.088s
[EMAIL PROTECTED]:~$

poor-threading.c (your original threading):
[EMAIL PROTECTED]:~$ gcc -lmysqlclient_r -lpthread poor-threading.c
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.118s
user 0m0.026s
sys 0m0.069s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.110s
user 0m0.018s
sys 0m0.049s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.110s
user 0m0.029s
sys 0m0.050s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m0.109s
user 0m0.029s
sys 0m0.054s
[EMAIL PROTECTED]:~$

no-threading.c (your original as well):
[EMAIL PROTECTED]:~$ gcc -lmysqlclient no-threading.c
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe OFF

real 0m0.096s
user 0m0.023s
sys 0m0.032s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe OFF

real 0m0.095s
user 0m0.012s
sys 0m0.038s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe OFF

real 0m0.095s
user 0m0.019s
sys 0m0.028s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe OFF

real 0m0.094s
user 0m0.015s
sys 0m0.034s
[EMAIL PROTECTED]:~$

I ran these on Linux 2.6; my db server is not localhost but another
server on the same network.  As you can see, the pooled threading (I
had 25 connections in the pool) was the fastest as far as real time.
Part of this might be because it utilizes network bandwidth better.  It
did however use more actual CPU time than the single threaded
implementation, but usually what you really care about is real time
anyway.  Apart from that, I'd say that testing with 100 threads each
doing one query is silly, as you incur the thread
creation/initialization overhead once per query.  A better test is to
have 100 threads do 100 queries in a row or something, vs a single
thread doing 10,000 queries.  Here are my results for doing that with
the same implementations (I dropped your threading implementation as I
think we've established it's not the winner):
pooled-threading2.c:
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m3.380s
user 0m2.487s
sys 0m5.761s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m3.373s
user 0m2.602s
sys 0m5.720s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m3.406s
user 0m2.503s
sys 0m5.670s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe ON

real 0m3.403s
user 0m2.472s
sys 0m5.698s

no-threading2.c (yours modified to do 100*100 instead of just 100):
[EMAIL PROTECTED]:~$ gcc -lmysqlclient no-threading2.c
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe OFF

real 0m9.085s
user 0m1.404s
sys 0m3.377s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe OFF

real 0m8.961s
user 0m1.436s
sys 0m3.313s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe OFF

real 0m8.937s
user 0m1.461s
sys 0m3.253s
[EMAIL PROTECTED]:~$ time ./a.out
Thread Safe OFF

real 0m8.977s
user 0m1.419s
sys 0m3.291s
[EMAIL PROTECTED]:~$

As you can see the same differences get more exaggerated here, there is CPU
overhead to do threading, but the real-time does decrease by using it.

Here is the code for pooled-threading2.c:
/*** MULTI THREADED EXAMPLE CODE ***/
/ -lmysqlclient_r -lpthread /
#include stdarg.h
#include stdio.h
#include stdlib.h
#include string.h
#include pthread.h
#include mysql/mysql.h

#define MAX 100
#define CONNECTIONS 25

typedef struct db_donfig {
char host[16];
char user[16];
char pass[16];
char name[16];
unsigned int port;
char *socket;
} db_config;

typedef struct db_mutex {
MYSQL *db;
pthread_mutex_t lock;
} db_mutex;

db_mutex dbm[CONNECTIONS];

void *db_pthread(void *arg);
static void db_die(MYSQL *db, char *fmt, ...);
MYSQL *db_connect(MYSQL *db, db_config *dbc);
void db_disconnect(MYSQL *db);
long db_query(MYSQL *db, pthread_mutex_t *lock, const char *query);

int main(int argc, char **argv) {
int i;
pthread_t pthread[MAX];
db_config dbc;

my_init();

strcpy(dbc.host,devdb01);
strcpy(dbc.user,scopeuser);
strcpy(dbc.pass,gosonicsalpha);
strcpy(dbc.name http://dbc.name,);
dbc.port = 3306;
dbc.socket = NULL;

for(i=0; iCONNECTIONS; ++i) {
dbm[i].db = db_connect(dbm[i].db, dbc);
pthread_mutex_init(dbm[i].lock, NULL);
}

if (!mysql_thread_safe())
fprintf(stderr, Thread Safe OFF\n);
else
fprintf(stderr, Thread Safe ON\n);

// pthread_setconcurrency(4);
// fire up the threads
for (i = 0; i  MAX; ++i)
pthread_create(pthread[i], NULL, db_pthread, (void *)(i%CONNECTIONS));
// wait for threads to finish
for (i = 0; i  MAX; ++i)
pthread_join(pthread[i], 

SET query with improper AND doesn't generate error?

2005-09-21 Thread John McCaskey
Ok,

 

So I had another developer come to me today complaining that MySQL
wouldn't set a column to NULL.  I figured out it was because instead of
comma-delimiting his fields to SET he was delimiting them with AND, yet
MySQL took this query and didn't generate any error.  I'm assuming this
is like == vs = in C++: somehow MySQL evaluated it as a Boolean logic
statement that turned into a valid query.  I don't have the time to
think about it too much, but I thought it would be interesting to hear
how it was valid and why it didn't generate an error (or maybe it's
actually a bug and should be an error?).

 

Here is the query:

 

UPDATE dashboard_tab_user SET dashboard_tab_account_id = NULL AND
dashboard_tab_account_server_id = NULL WHERE dashboard_tab_user_id = 194
AND dashboard_tab_user_server_id = 99;

 

Here is the table schema:

CREATE TABLE `dashboard_tab_user` (

  `dashboard_tab_user_id` mediumint(8) unsigned NOT NULL auto_increment,

  `dashboard_tab_user_server_id` tinyint(3) unsigned NOT NULL default
'0',

  `dashboard_tab_account_id` mediumint(8) unsigned default '0',

  `dashboard_tab_account_server_id` tinyint(3) unsigned default '0',

  `user_id` mediumint(8) unsigned NOT NULL default '0',

  `user_server_id` tinyint(3) unsigned NOT NULL default '0',

  `dashboard_tab_user_name` char(18) NOT NULL default '',

  `snap` enum('on','off') NOT NULL default 'off',

  `creation_timestamp` timestamp(14) NOT NULL,

  `cols` tinyint(3) NOT NULL default '2',

  `rows` tinyint(3) NOT NULL default '2',

  `active` enum('on','off') NOT NULL default 'on',

  PRIMARY KEY  (`dashboard_tab_user_id`,`dashboard_tab_user_server_id`),

  KEY `user_id` (`user_id`,`user_server_id`),

  CONSTRAINT `dashboard_tab_user_ibfk_1` FOREIGN KEY (`user_id`,
`user_server_id`) REFERENCES `user` (`user_id`, `user_server_id`) ON
DELETE CASCADE

) TYPE=InnoDB 

 

The query ran and set dashboard_tab_account_id to NULL correctly,
but did not modify dashboard_tab_account_server_id in any way.
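
For the curious, here is how the parser appears to read it: everything
after the first '=' is one boolean expression, so the statement is
equivalent to the following, where the inner '=' is a comparison rather
than an assignment:

UPDATE dashboard_tab_user
SET dashboard_tab_account_id =
    (NULL AND (dashboard_tab_account_server_id = NULL))
WHERE dashboard_tab_user_id = 194
AND dashboard_tab_user_server_id = 99;

NULL AND anything evaluates to NULL (or 0), so the first column gets
NULL and the second column is never assigned at all, matching the
behavior described above.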

 

John A. McCaskey

[EMAIL PROTECTED]

 



SET FOREIGN_KEY_CHECKS=0 being ignored

2005-07-11 Thread John McCaskey
Hey,
 
I have an application using the C API that is doing a REPLACE command
into an InnoDB table that has other tables with cascading deletes
relying on its entries.  Rather than use an UPDATE/check
affected/INSERT/check success/repeat method, we have wrapped the
REPLACE query in a SET FOREIGN_KEY_CHECKS=0; before and SET
FOREIGN_KEY_CHECKS=1; after pair of commands.  This is working great in
our dev and test environments, but it's been discovered that on our
production servers it is apparently having no effect and the cascading
deletes are occurring anyway.
 
So, the first thing I thought was 'something must be wrong with the
permissions' but I've been unable to find any discrepancies and the
manual doesn't seem to indicate you even need any special permissions to
execute the set command.  Has anyone else experienced anything similar?
Does anyone have any ideas what environmental differences could cause
the SET FOREIGN_KEY_CHECKS command to be ignored?  I'm at my wits end
here... any suggestions appreciated.
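
For reference, the wrapped sequence looks like this (a sketch; the real
statements are generated by the C code, and the table name here is made
up):

SET FOREIGN_KEY_CHECKS=0;
REPLACE INTO parent_table (id, name) VALUES (1, 'x');
SET FOREIGN_KEY_CHECKS=1;

One property worth keeping in mind (it may or may not be the culprit
here): FOREIGN_KEY_CHECKS is a per-session variable, so all three
statements have to run on the same connection; if a pool or reconnect
slips the REPLACE onto a different connection, the checks are silently
back on for it.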
 
John A. McCaskey
 


RE: SET FOREIGN_KEY_CHECKS=0 being ignored

2005-07-11 Thread John McCaskey
Oh, I should also mention we have binary logging on, and I verified by
looking at the binary log that the commands are being executed and
logged with an error code of 0.  So the obvious thought - that the code
is broken and not running them - is unfortunately not the problem.  It
seems to be some sort of legitimate MySQL setup error on our part, or a
bug in MySQL.

John A. McCaskey




RE: SET FOREIGN_KEY_CHECKS=0 being ignored

2005-07-11 Thread John McCaskey
Ok, So I recreated a brand new user in our production server and it now
works correctly.  However, I still have no idea why the old user did not
work.  Here is the permission info for the old user:

mysql> show grants for 'scopeuser'@'10.254.%';
+-------------------------------------------------------------------------------------------------+
| Grants for [EMAIL PROTECTED]                                                                     |
+-------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'scopeuser'@'10.254.%' IDENTIFIED BY PASSWORD '2326f23b5ff9232'   |
+-------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql>

So it looks like this is some sort of bug, with the user being
corrupted somehow and permissions checks causing the SET
FOREIGN_KEY_CHECKS=0 to not work but still return success?  Seems like
a MySQL bug then, right?

John A. McCaskey




RE: SET FOREIGN_KEY_CHECKS=0 being ignored

2005-07-11 Thread John McCaskey
See comments inline below... 

However, also note that it now turns out after more testing that the
new user is having the same issue some of the time.  Sometimes the
replace does not cascade, other times it does.  Every single time,
though, the binary log properly lists the SET FOREIGN_KEY_CHECKS as
succeeding right before the replace.

John A. McCaskey

-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 11, 2005 11:42 AM
To: John McCaskey
Cc: mysql@lists.mysql.com
Subject: Re: SET FOREIGN_KEY_CHECKS=0 being ignored

Some additional info would probably be helpful:

What version of mysql are you running?  

4.0.18

On what platform?

Debian Linux (2.4 kernel)

For the sake of comparison, what does SHOW GRANTS say for your brand
new user?

+-------------------------------------------------------------------------------------+
| Grants for [EMAIL PROTECTED]                                                         |
+-------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'proxyuser'@'10.%' IDENTIFIED BY PASSWORD '12345273123400f'    |
| GRANT ALL PRIVILEGES ON `IPS_config`.* TO 'proxyuser'@'10.%'                         |
| GRANT ALL PRIVILEGES ON `IPS_data`.* TO 'proxyuser'@'10.%'                           |
+-------------------------------------------------------------------------------------+
3 rows in set (0.00 sec)

Finally, did you 'SELECT CURRENT_USER()' to verify that you were logged
in as 
'scopeuser'@'10.254.%' when it didn't work?

These are being called from a C program using the C API, so no.  However,
I do use SHOW PROCESSLIST to see the connections and can verify there
that they are connected as the correct user.

Michael




RE: Backing up live MySQL Databases

2005-06-16 Thread John McCaskey
If you use InnoDB then the InnoDB Hot Backup tool works very well.  It's
not free, but it's inexpensive, and if you are in an environment where
you need to do large, fast hot backups you will probably find it well
worthwhile.

We normally do backups as follows:

Production Server A <---> Production Server B
        |
        v
  Backup Server

(The arrows above are replication)

Where in the above A and B replicate to each other and both are live for 
production use.  Then the backup server replicates off of A but is never used 
in production.  We then once a week do a full mysqldump of the backup and call 
that our full backup.  Nightly we rotate the binary logs for incremental 
backups on the backup server and store them with the full dump for the week.

Using that model you never need a 'hot backup', as you can take the
backup server offline, take a backup, then bring it back and let it
catch up.  Where you do need a hot backup is in the event the backup
server fails or the db corrupts.  Then we use InnoDB Hot Backup to take
a live snapshot off of Production Server A, restore the backup server
from it, and restart replication consistently.  This model works very
well and has been in use for more than a year on a 60GB+ database.  It
does have the cost of requiring some extra hardware, of course.
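
For the curious, the weekly/nightly cycle on the backup server boils
down to something like this (a sketch; the real jobs are scripted
around mysqldump and cron):

FLUSH LOGS;        -- nightly: close the current binary log, start a new one
SHOW MASTER LOGS;  -- list the finished logs to copy off as incrementals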

John A. McCaskey

-Original Message-
From: Glennie Vignarajah [mailto:[EMAIL PROTECTED] 
Sent: Thursday, June 16, 2005 10:50 AM
To: mysql@lists.mysql.com
Subject: Re: Backing up live MySQL Databases

On Thursday 16 June 2005 17:06, Jeff McKeon ([EMAIL PROTECTED]) wrote:

Hello,

 Anyone using any third party products like Arkeia that enable you
 to do a live backup of MySQL database?

 I tried Arkeia a few months ago.  The online backup and restore of
MySQL databases worked well.


 I'm searching for a backup solution for our new data center and
 would like to here what other people are using for MySQL.

 NetVault from BakBone (www.bakbone.com) also does online backup
(I've not tried MySQL hot backup yet).  There is a demo version, so
you can give it a try.
 With my little experience of both products, I can say that they are
both quite user friendly and can handle Linux/Solaris/Windows
clients (but the Arkeia server needs to be installed on a Linux
machine)...  We chose NetVault because it can back up our INFORMIX
server (which can't be switched to MySQL yet :-( ) and LDAP servers
online!

-- 
Glennie
Ambition is the last refuge of failure. ("L'ambition est le dernier refuge de l'échec")




Re: Solution to slow queries

2005-05-10 Thread John McCaskey
On Tue, 2005-05-10 at 14:56 -0400, Frank Bax wrote:
 At 02:22 PM 5/10/05, Paul Halliday wrote:
 Now, as time progresses the queires are getting slower and slower.
 I know this is expected,
 
 
 I don't think so.  I thought that if the number of rows returned does not 
 change and an index is properly used, then query time should not change 
 significantly as size of database grows. 
 

True, for the appropriate definition of 'significantly'.  Also false,
for the appropriate definition of 'significantly'.  The indexes are
trees which must be searched; this is fairly fast, and the time doesn't
grow linearly or anything like that, but it does of course grow with
more rows.  So if the number of rows increases greatly, then a
noticeable increase in the time to search the index may occur.
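
If you want to confirm the index is still being used as the table
grows, EXPLAIN is the quickest check (a generic sketch; the table and
column names are made up):

EXPLAIN SELECT * FROM events WHERE sensor_id = 42;

If the 'key' column shows your index and 'rows' stays close to the
number of rows actually returned, the index lookup itself isn't the
problem.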

John




Re: How does a multi-row INSERT work?

2005-03-31 Thread John McCaskey
INSERT INTO table (field1, field2) VALUES (1, 2), (3, 4), (5, 6), (7,
8);

That would insert 4 rows: the first row with field1=1, field2=2, the
second with field1=3, field2=4, etc.

This is documented on the INSERT Syntax page of the manual, but it may
be kind of hard to read for a beginner as it just says VALUES({expr |
DEFAULT},...),(...),...

On Thu, 2005-03-31 at 11:46 -0800, Chris W. Parker wrote:
 Hello,
 
 I searched the archives, looked through the manual, and searched google
 for info on how to actually perform a multi-row INSERT but didn't find
 an answer.
 
 Would someone please show me the syntax for this please?
 
 I could just do a loop and INSERT the data that way but according to the
 manual, a multi-row INSERT is faster.
 
 
 
 Thanks,
 Chris.
 
-- 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Url http://highperformancemysql.com/

2005-02-18 Thread John McCaskey
On Fri, 2005-02-18 at 08:08 -0800, Jason Martin wrote:
 On Fri, Feb 18, 2005 at 10:06:38AM +0100, Anton Kornexl wrote:
  There should be tools on this website, but i see only a message from 
  Infektion Group.
  What happened ?
 Looks like the website got hacked.
 

I guess they should have written a book on high security mysql instead?

John A. McCaskey

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: processes and threads question

2005-01-18 Thread John McCaskey
Did you upgrade to a 2.6.x Linux kernel as well?  Starting with 2.6, threads
are hidden in a normal ps aux listing and show as a single process.  If this
is the case, run ps aux -L and you will see the threads as well.

On Tue, 2005-01-18 at 13:15 -0500, Eben Goodman wrote:
 I used to run mysql 3.x on Redhat, and would on any given day have 
 anywhere from 5 to 30 mysqld process/threads going depending on 
 traffic.  I upgraded to mysql 4.0.17 on a dedicated redhat box, and now 
 it only ever shows 1 mysqld process/thread that uses max cpu and 
 consumes a good deal of memory.  This is all fine, but I'd like to 
 understand what has changed and why I'm not seeing multiple threads like 
 I used to under 3.x?
 
 Any insight is appreciated.
 
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



not all rows returned when using order by and null values?

2005-01-04 Thread John McCaskey
See below:
mysql> (SELECT avg FROM event_log_5minute_20050104 WHERE
monitor_id=479139 AND monitor_server_id=1 AND timestamp >=
2005010408 AND timestamp < 2005010508 order by avg);
+-+
| avg |
+-+
|NULL |
|NULL |
|NULL |
|NULL |
|NULL |
| 55854.1 |
| 55854.1 |
| 63566.8 |
| 70157.6 |
|  121185 |
|  128803 |
|  172269 |
|  320097 |
+-+
13 rows in set (0.00 sec)

mysql> (SELECT avg FROM event_log_5minute_20050104 WHERE
monitor_id=479139 AND monitor_server_id=1 AND timestamp >=
2005010408 AND timestamp < 2005010508) order by avg;
+-+
| avg |
+-+
|NULL |
| 55854.1 |
| 55854.1 |
| 63566.8 |
| 70157.6 |
|  121185 |
|  128803 |
|  172269 |
|  320097 |
+-+
9 rows in set (0.00 sec)

mysql> 

As you can see, I have 13 rows that match the query; if I put the ORDER BY
inside the brackets it works fine, but when it is moved outside of them it
fails.

The reason I have the brackets is this is a dynamically generated query
and may span tables ie:

(SELECT avg FROM event_log_5minute_20050104 WHERE monitor_id=479139 AND
monitor_server_id=1 AND timestamp >= 2005010408 AND timestamp <
2005010508) UNION ALL (SELECT avg FROM event_log_5minute_20050105
WHERE monitor_id=479139 AND monitor_server_id=1 AND timestamp >=
2005010408 AND timestamp < 2005010508) order by avg;

Am I missing something and this is expected behavior and not a bug?  I'm
using 4.0.18; I haven't checked the changelogs yet, maybe it's been
fixed.

-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: not all rows returned when using order by and null values?

2005-01-04 Thread John McCaskey
On Tue, 2005-01-04 at 17:43 -0500, Rhino wrote:
 - Original Message - 
 From: John McCaskey [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Tuesday, January 04, 2005 5:22 PM
 Subject: not all rows returned when using order by and null values?
 
 
 Wow, that's just weird if you ask me ;-) It's as if you added a 'DISTINCT'
 to the SELECT when you move the ORDER BY outside of the brackets - except for
 the fact that it returns BOTH of the 55854.1 values.
 
 Sorry, I don't have any idea what is going on there or if it is a bug that
 has been fixed already. ;-)
 

Yeah, that's exactly what it is like, and it's corrupting my computations.
I haven't been able to come up with a good workaround that doesn't cause a
significant slowdown in my application (these queries are run several
million times daily, using a LIMIT x,1 to select the 95th or 99th
percentile value, which is then stored in another table).

If anyone else has ideas please comment!  Thanks.

 Rhino
 
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: not all rows returned when using order by and null values?

2005-01-04 Thread John McCaskey
FYI, I have now verified this bug (?) occurs in 4.0.23 as well.

-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: not all rows returned when using order by and null values?

2005-01-04 Thread John McCaskey
Even more interesting...:
mysql> (SELECT avg FROM event_log_5minute_20050104 WHERE
monitor_id=479139 AND monitor_server_id=1 AND timestamp >=
2005010408 AND timestamp < 2005010508) order by avg;
+-+
| avg |
+-+
|NULL |
|   43282 |
| 55854.1 |
| 55854.1 |
| 63566.8 |
| 70157.6 |
|  121185 |
|  124671 |
|  128803 |
|  172269 |
|  200622 |
|  211235 |
|  273802 |
|  320097 |
|  324012 |
|  422318 |
|  465117 |
|  543541 |
|  578753 |
|  606568 |
|  634915 |
|  698947 |
|  746309 |
|  882563 |
| 1.18043e+06 |
| 1.50719e+06 |
| 1.54081e+06 |
| 1.60248e+06 |
| 1.68207e+06 |
+-+
29 rows in set (0.00 sec)

mysql> 

The above returns only 1 of the 5 NULLs actually in my table... but if I do:
mysql> (SELECT avg FROM event_log_5minute_20050104 WHERE
monitor_id=479139 AND monitor_server_id=1 AND timestamp >=
2005010408 AND timestamp < 2005010508) UNION ALL (SELECT NULL)
order by avg;
+-+
| avg |
+-+
|NULL |
|NULL |
|NULL |
|NULL |
|NULL |
|NULL |
|   43282 |
| 55854.1 |
| 55854.1 |
| 63566.8 |
| 70157.6 |
|  121185 |
|  124671 |
|  128803 |
|  172269 |
|  200622 |
|  211235 |
|  273802 |
|  320097 |
|  324012 |
|  422318 |
|  465117 |
|  543541 |
|  578753 |
|  606568 |
|  634915 |
|  698947 |
|  746309 |
|  882563 |
| 1.18043e+06 |
| 1.50719e+06 |
| 1.54081e+06 |
| 1.60248e+06 |
| 1.68207e+06 |
+-+
34 rows in set (0.00 sec)

Now I get all 5, plus the one I added with the UNION... so it appears to
only affect queries that are parenthesized like a UNION but do not actually
contain a union?
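Until the bug is sorted out, one workaround sketch: have the query generator
emit the ORDER BY inside the parentheses whenever only a single SELECT is
produced (which, as the first result above shows, returns all rows), and only
append a trailing ORDER BY when there is a real UNION:

(SELECT avg FROM event_log_5minute_20050104 WHERE monitor_id=479139
 AND monitor_server_id=1 AND timestamp >= 2005010408
 AND timestamp < 2005010508 ORDER BY avg);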



RE: not all rows returned when using order by and null values?

2005-01-04 Thread John McCaskey
FYI, I created a bug for this (http://bugs.mysql.com/bug.php?id=7672) which has 
now been updated to verified.  So it looks like it is in fact a mysql bug.




Re: Relative efficiency (in terms of disk io) between REPLACE and UPDATE w/InnoDB

2004-12-16 Thread John McCaskey
Ahhh, that's very good to know.  Thank you.

On Wed, 2004-12-15 at 19:09 -0500, Harrison Fisk wrote:
 No.
 
 In InnoDB an UPDATE is done as a DELETE/INSERT internally because it
 is 
 multiversioning and it has to be able to rollback in case of a
 problem. 
   So the UPDATE effectively does the same thing as the REPLACE that
 hits 
 a problem.   I would most likely stick with REPLACE since it is a bit 
 easier to understand how it is working and has less client code.
 
 Regards,
 
 Harrison
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Relative efficiency (in terms of disk io) between REPLACE and UPDATE w/InnoDB

2004-12-15 Thread John McCaskey
I'm currently doing a large number of REPLACE queries.  I know that these
evaluate as if doing a DELETE/INSERT pair, and I'm wondering whether this is
true at the disk I/O level as well, with extra I/O occurring for the delete
and then the re-insertion, versus what would occur with an UPDATE.

Roughly, each row gets updated around 12-24 times; the updates do not
affect the primary key, or any of the other keys for that matter.  The
table is InnoDB.

So I'm thinking that if the row is deleted and then re-inserted, there is
the overhead of (1) finding the old row and marking it deleted, (2)
searching for the correct position for the row in the table and in the key
structures, and (3) writing the row to disk.

By contrast, an UPDATE would have the overhead of (1) finding the old row
and (2) writing the updates to disk.

As such, it seems like it would actually be faster for me to attempt an
UPDATE, and if it fails, then do an INSERT, rather than using REPLACE?

Is this correct?


-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Relative efficiency (in terms of disk io) between REPLACE and UPDATE w/InnoDB

2004-12-15 Thread John McCaskey
On Wed, 2004-12-15 at 11:46 -0600, gerald_clark wrote:
 
 This provides a window for errors.  You can attempt the update, and 
 before the insert, another process
 could insert the record.
 An insert followed by an update on failure closes this window.
 
That's true, and thanks for pointing it out.  

However, in my situation, I am not concerned with this.  The tables in
question are basically data-logging tables, which are updated by a
single process that will always UPDATE first, check whether any rows were
matched (I'll use the CLIENT_FOUND_ROWS option when connecting), and
then INSERT if not.

Furthermore, the window for errors could be closed in any situation by
first doing the UPDATE, checking whether any rows were matched, and then
using REPLACE (rather than INSERT).

This would make the case where a row does not already exist more
expensive than before, but because that case occurs only about 1 in 24
times for me, adding the UPDATE is still much faster overall, I think.

This all assumes that, as I stated above, the REPLACE query works as
described and is truly just a DELETE/INSERT, and is not itself optimized
to avoid some of the overhead when a row already exists.
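To make that concrete, a sketch of the update-first pattern (table name and
values are invented):

UPDATE log_table SET avg = 42.0
 WHERE monitor_id = 479139 AND timestamp = 2005010408;
-- the client checks the matched-row count (with CLIENT_FOUND_ROWS set),
-- and only when no row matched:
REPLACE INTO log_table (monitor_id, timestamp, avg)
 VALUES (479139, 2005010408, 42.0);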

 
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Clustering and a large database

2004-12-06 Thread John McCaskey
Yes, MySQL clustering uses a RAM-only database.  It does not make sense to
use it if you have a very large database.

You can use master/slave functionality with whatever table type you
like, but using the newer clustering technology you have no choice but
to use the NDB table type, which is RAM-only.

On Mon, 2004-12-06 at 16:53 -0500, Joshua Beall wrote:
 Klaus Berkling [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
  I have started to import our data.  I gather from the manual that tables 
  are stored in RAM.  I am trying to import a database with 11 tables with 
  about 7 million rows.  If I follow the math in the manual, one row will 
  use 32KB, I would need 224 TB of RAM.
 
 What part of the manual leads you to believe that the entire contents of 
 every table needs to be stored in RAM?  I was not aware of this requirement. 
 
 
 
 
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Question on date calculation +

2004-11-17 Thread John McCaskey
You can't automatically update it; that would require triggers, which are
not supported in MySQL.  You would need some sort of script that runs
once a day and manually uses the functions described in the link
Bernard sent you to update the field.

However, I would recommend a different table structure, with 2 fields:
startDate,
endDate

Then nothing needs to be updated.  When your script checks whether the
user has time left, just select where endDate > NOW().  You can compute
the endDate easily when doing your insert as
DATE_ADD(startDate, INTERVAL LenChoise DAY), so you don't need to
modify anything as far as how you present the choice to the user.
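A minimal sketch of that design, with invented table and column names
(LenChoise standing in for whatever the user picked):

CREATE TABLE memberships (
  user_id   INT NOT NULL,
  startDate DATE NOT NULL,
  endDate   DATE NOT NULL
);
INSERT INTO memberships (user_id, startDate, endDate)
 VALUES (42, CURDATE(), DATE_ADD(CURDATE(), INTERVAL 30 DAY));
-- who still has time left:
SELECT user_id FROM memberships WHERE endDate > NOW();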

On Wed, 2004-11-17 at 13:58 -0800, Stuart Felenstein wrote:
 --- Bernard Clement [EMAIL PROTECTED]
 wrote:
 
  
  You will get your answers by reading carefully the
  Date Time Functions
  in the MySQL Reference Manual.
  
 This will tell me how to automatically update the
 column in question ?  
 
 Stuart
 
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Newbie question about web users

2004-11-04 Thread John McCaskey
The answer is you create one user for the PHP (webserver) process to
use.

Then you keep track of user permissions in your application code.  This
probably involves creating your own user table within your own database
and storing users of your site there.  Then in the other tables you
associate items with users; for example, in your fav_book table one of the
fields would be user_id, which would allow you to figure out which user
each favorite entry belonged to.

You do not create MySQL users to represent application users, however,
since the users don't access the database directly; the webserver does.
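A bare-bones sketch of what that looks like (all table, column, and account
names here are hypothetical):

CREATE TABLE users (
  user_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  login   VARCHAR(50) NOT NULL,
  passwd  VARCHAR(50) NOT NULL
);
CREATE TABLE fav_books (
  user_id INT NOT NULL,   -- ties each favorite to a row in users
  book_id INT NOT NULL
);
-- a single MySQL account shared by the web application:
GRANT SELECT, INSERT, UPDATE, DELETE ON bookstore.*
 TO 'webapp'@'localhost' IDENTIFIED BY 'some_password';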

On Thu, 2004-11-04 at 20:53 +, Matthew Benton wrote:
 Hello all,
  
 Basic question about creating a database that will be accessed by many users. 
 Do I create one web user account with no password in the mysql.user table or 
 add each user to the mysql.user table as they join up to my site? (Lots of 
 books and documentation say how to create users, the privilege system, etc., 
 but none seem to address this basic structural question - presumably because 
 it's obvious to anyone who already knows). I'm planning a mysql and PHP site.
  
 Taking a mundane example - lets say a second hand bookstore has a site which 
 lets users search through a list of books, and lets subscribed users select 
 favorite books, 0-n number of books, in a table fav_books. Each fav_book 
 tuple is then related to at least one user.  When the favourite book comes 
 into stock the store sends a message to the user to let them know it's 
 available. (I said it was a mundane example!).
  
 If the bookstore created a default user then I suppose anyone can alter 
 anyone else's data in the fav_book table (undesirable)? Would they create a 
 table of authenticated_users for example instead of putting them in the 
 mysql.user table, then use PHP to retain a record of which user is logged in 
 at that time and so which tuples in the fav_book table they can change? 
 Alternatively if each user is listed in the mysql.user table they would all get 
 the same privileges and the bookstore would have to create a separate 
 fav_book table for each user to contain just their data. With as many tables 
 as users it would surely be impossible to run background processes which 
 check the new stock against books people are looking for?
  
 As I say it's a basic question about web use of a database - hope the answer 
 isn't too obvious.
  
 Thanks for any help
 matt
 
   
 -
  ALL-NEW Yahoo! Messenger - all new features - even more fun!  
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: password guessing attacks against mysql

2004-10-30 Thread John McCaskey
No, it is not true.

After repeated failed connection attempts from a host, that host will be
blocked until a FLUSH HOSTS command is executed.  The number of failures
allowed before this blocking kicks in is specified by the variable
max_connect_errors.

See http://dev.mysql.com/doc/mysql/en/Blocked_host.html
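The related knobs, sketched from memory (check the docs for your version
before relying on these):

SHOW VARIABLES LIKE 'max_connect_errors';
SET GLOBAL max_connect_errors = 10;   -- needs SUPER, and a version where it is dynamic
FLUSH HOSTS;                          -- unblocks any hosts that hit the limit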

John



From: Dave Dyer [mailto:[EMAIL PROTECTED]
Sent: Sat 10/30/2004 2:01 PM
To: [EMAIL PROTECTED]
Subject: password guessing attacks against mysql




While discussing the hazards of having an open mysql port, it occurred
to me that I have never seen any mention of defenses against password
guessing attacks, such as slow response to failed authentication, or
shutting of a particular remote IP that seems to be issuing unsucessful
requests.

Is it true that there are none?


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





Re: Ignore a single query in replication

2004-10-21 Thread John McCaskey
So I gather you are creating a table and doing some work in it, but even
though it isn't declared 'temporary' it really is, and you don't want it
replicated?

If this is the case, you can create the table in a separate database and,
in your MySQL configuration, tell the binary logging to exclude that
database.  Then anything in that specific database won't get replicated.
I believe you can only do this exclusion at the database level, not per
table.  But you can perform all your queries across databases just fine.
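As a sketch, with 'scratch' standing in for whatever you name the excluded
database:

# my.cnf on the master
[mysqld]
log-bin
binlog-ignore-db = scratch

One caveat worth knowing: with statement-based logging this filter keys off
the current default database, so USE scratch before running the statistical
queries rather than relying on scratch.table qualifiers alone.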


On Thu, 2004-10-21 at 11:24 -0700, Gary Richardson wrote:
 Hey,
 
 Is there a way to tell the slave to not execute a query without
 ignoring tables or databases?
 
 There are a bunch of queries that happen on the master for statistical
 purposes that don't use temp tables and generate large amounts of
 data. These queries don't need to run on the slaves and in fact slow
 it down quite a bit.
 
 I've tried hunting around the online docs, but I can't seem to find
 anything. For some reason I thought there was some sort of comment
 that I could put infront of my query to accomplish this.
 
 Thanks.
 
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Java.lang.outOfMemoryError on large ResultSet

2004-10-20 Thread John McCaskey
The problem is not the memory on the database server, but on the client
running the JDBC connection.  It seems that when you retrieve the result it
tries to store everything in memory at once.

Here is some info from the mysql docs that looks like it pertains, see
http://dev.mysql.com/doc/connector/j/en/index.html for more.


By default, ResultSets are completely retrieved and stored in memory.
In most cases this is the most efficient way to operate, and due to the
design of the MySQL network protocol is easier to implement. If you are
working with ResultSets that have a large number of rows or large
values, and can not allocate heap space in your JVM for the memory
required, you can tell the driver to 'stream' the results back one row
at-a-time.

To enable this functionality, you need to create a Statement instance in
the following manner: 

stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
  java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);

The combination of a forward-only, read-only result set, with a fetch
size of Integer.MIN_VALUE serves as a signal to the driver to stream
result sets row-by-row. After this any result sets created with the
statement will be retrieved row-by-row.

There are some caveats with this approach. You will have to read all of
the rows in the result set (or close it) before you can issue any other
queries on the connection, or an exception will be thrown. Also, any
tables referenced by the query that created the streaming result will be
locked until all of the results have been read or the connection
closed.

John McCaskey

On Wed, 2004-10-20 at 14:20 -0500, [EMAIL PROTECTED] wrote:
 MySQL 5.0 Alpha
 Jdbc Driver: Connector J
 OS: Windows 2000 Professional
 
 Table Size 1 Mil. Records.
 
 Table Structure:
 Table1
 logid - autoincrement int(11)
 Timestamp - timestamp
 Timestamp2 - timestamp
 Tag - varchar(15)
 
 
 Query/Code:
 ---
 PreparedStatement pst = con.prepareStatement("Select
 logid,timestamp,Timestamp2,Tag from Table1");
 ResultSet rs = pst.executeQuery(); // getting
 java.lang.OutOfMemoryError here, even before iterating over the resultset
 below.
 while (rs.next())
 {
   // check something, do something...
 }
 
 ... Tried the same thing above doing it 3 times, with limits of 400k at
 a time. The first loop goes through fine.
 2nd loop which has a limit of 360k-760k again gives out of memory
 exception.
 
 
 What shall I do?
 I have to iterate through all the Mil. Records..
 (This is a once-a-week operation).
 
 I have increased Max_Allowed_Packet to 200M , and it still gives the
 same errors, so I know that's not the problem here.
 
 Please advice.
 
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
[EMAIL PROTECTED]
206.902.2027

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: my_thread_init

2004-10-13 Thread John McCaskey
I believe that what you described is perfectly acceptable.  The thing to
keep in mind is that thread_init allocates thread-specific memory for
mysql, and thread_end clears it.  As such you should never execute
any other mysql commands unless you have executed an init, and you
should never init more than once without first ending; but assuming you
get them all matched up right, it is alright to have multiple init/end
pairs within a thread.

I do this in my own code, which has worked well for some time now, so
while the manual doesn't make it completely clear that this is OK, I'm pretty
sure it is.

One note: I see you write my_thread_init/my_thread_end in your message;
the correct names are mysql_thread_init and mysql_thread_end.  The
my_init function should be called once per process, not in each thread.

John

On Wed, 2004-10-13 at 10:39 +0100, Philippe Poelvoorde wrote:
 Hi,
 
 I'm using the C api within a multithread environement. Is that allright 
 if I have a function that looks like this :
 saveParam(){
   my_thread_init();
   [connexion/query/close]
   my_thread_end();
 }
 
 Can I call it several time from the same thread ? Or do I have to do 
 my_thread_init/end only once for each thread ?
 
 -- 
 Philippe Poelvoorde
 COS Trading Ltd.
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Some basic and advanced replication questions

2004-10-13 Thread John McCaskey
(a) You are mistaken.  You can easily set both up as masters and then as
slaves to each other.  Simply enable the necessary binary logging on
both sides, then use the CHANGE MASTER TO command to point each
at the other (a sketch appears after (e) below).  Now when you do an insert
or delete on either, it gets replicated to the other.

(b) It's already in 4.x as described above, so yes, it will be in 5.0 as
well.

(c) Clustering as such is not supported, but you can set up a type of ring
network with all updates getting sent from master to slave all the way
around the ring, so that all servers 2-n get all updates/deletes/inserts.
The issue is detecting when a node goes down; routing around it is a
manual process.

(d) Yes, see the comments above.  Just set them all to do binary logging; if
you want a ring rather than a dual-master pair, you should also set
the log-slave-updates option so that each server logs not only its own
updates, but also those it receives from its master.

(e) If you had just a one-way master-slave relationship and you deleted
data on the slave, that would be very bad.  The slave would no longer match
the master; the record would not be re-inserted, and if you then updated
the record on the master, when the command got replicated to the slave an
error would be generated and replication would halt, waiting for you to fix
it manually.  However, if you have set up a master/slave pair where
replication goes both ways as described in (a), or a ring as described in
(c), then you have no issue, as all servers will be kept consistent.
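For (a), a sketch of the wiring (host names, account, and log coordinates
are placeholders; run SHOW MASTER STATUS on the other box to get the real
file and position):

-- on server A, pointing at B:
CHANGE MASTER TO MASTER_HOST='server-b', MASTER_USER='repl',
  MASTER_PASSWORD='secret', MASTER_LOG_FILE='b-bin.001', MASTER_LOG_POS=4;
START SLAVE;
-- then the mirror-image CHANGE MASTER TO on B, pointing at A; for a ring
-- as in (c)/(d), also set log-slave-updates in each server's my.cnf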

John McCaskey

On Tue, 2004-10-12 at 21:19 +0200, Frank Fischer wrote:
 Hi
  
 i'm using MySQL version 4.0.20d.
 I was able to set up a simple replication between a master and a slave.
  
 To fully understand the replication mechanism of MySQL i would like to ask
 some questions (the manual does not answer all my questions):
  
 (a) in 4.0.20d there is no way for a kind of Master-Master replication,
 means, that both databases replicate to each other, right? 
 (b) Will there be such a feature in Version 5 (and does anyone know when it
 is planned to declare version 5 as stable and productive)?
 (c) Or is clustering a kind of Master-Master replication?
 (d)Is it possible to have a MySQL database server being Slave of a Master A
 and serving as Master for a Slave B at the same time and for the same
 database? 
 (e) When i have a Master-Slave replication, what happens when i delete a
 replicated record on the Slave - will the record than be replicated again or
 will the record remain deleted?
  
 So far my questions, and i'm aware that some of them sound quite weird, so i
 will try to explain what i'm trying to do:
  
 I have an application that runs on a server using a MySQL database that is
 also installed on that server. Besides i have the same server with the same
 configuration as a hot standby in case of something bad happens to my first
 (productive) server. Let's refer to them as Server A (active) and B (hot
 standby).
 Both servers have a public Network interface and a private Network
 interface. On the public side, they have the SAME IP Address. To avoid
 collisions, they are connected to a managed switch where only the port to
 Server A is active; it is switched to the port to Server B in case of emergency.
 On the private Interface both servers carry a own, unique private IP
 Address.
 Since Server B acts as hot standby, it's database should always be up to
 date, so i planned to use replication between Server A and B. A would act as
 Master, B as Slave. So far, so good.
 Now there is a third MySQL server (let's call it C) at a different location
 (interconnection by a VPN over the internet). This server needs the data
 from a table of my database in almost-realtime. So i thought, replication
 would be a good mechanism to do that. Now there are some problems:
 (a) I know how to set up a replication with A as Master and C as Slave, but what happens if
 A goes down and we have to switch to the hot standby B? B is configured as
 Slave in relation to A, so C would not be able to get Data from B. Is there
 a way to setup a Master-Slave relation from B to C so C could replicate the
 same data from B as it did before from A?
 (b) The Application on Server C is going to delete records that have been
 replicated. What happens then, will C replicate the deleted records again or
 will they remain deleted on C? On A or B they should never be deleted,
 regardless of what is done on C?
 (c) After an case of emergency when i want to bring A up again and let it
 server as active Server again, how would i be able to replicate the
 records that have been created on B during the absence of A? At the moment,
 i see no other way as to do this manually, as long as there is nothing like
 Master-Master replication. Any other ideas?
  
 Maybe i'm on a totally wrong track, maybe all some of you cracks out there
 have a better idea how to solve this? Would clustering be such a solution?
 Are there any other kind of synchronisation/replication tools that would
 allow me

Re: AW: InnoDB and foreign keys

2004-10-13 Thread John McCaskey
If you could post your table schema (SHOW CREATE TABLE table_name) and
then give an example of the query that is slow on InnoDB that would help
us give a better analysis.  Right now it sounds like something is wrong,
InnoDB is likely to be slightly slower than MyISAM because of
transaction overhead and row level locking for simple selects.  But, it
should not be 'very very slow' if MyISAM is 'very very fast' as it
sounds like you are saying.

John


On Wed, 2004-10-13 at 18:29 +0200, Ulrich Seppi wrote:
 HELLO
 
  does anybody know if Foreign keys increase the performance of select
 querys?
  example.
  DB1 has only INNODB tables.
  DB2 has the same structure as DB1 with all possible foreign keys.
 
  is the same query faster on DB1 or DB2?
 
 
 Why should it increase performance?
 
 At the moment I have the whole database in MyISAM tables, but there is very
 much data in it.
 I have several tables with over 500,000 records and over 100 MB, but until now
 all queries are fast.
 The problem is that it sometimes happens that tables become corrupt and I have to
 REPAIR them
 (mysql 4.1.5).
 All operations on the corrupt tables until the REPAIR will fail, and this is a
 big problem because
 several hundred people are working on the database at the same time, and then
 much data will be lost.
 
 Now, I tried on a test database to convert the tables to InnoDB, to be more
 stable and to have
 transactions. The result is that all operations (select queries) are very
 very slow.
 Not as fast as on MyISAM.
 
 How could I increase the performance of my InnoDB, now? I hoped that it
 could be done with
 foreign keys but it not seems so...
 
 does anybody have other ideas for solving the problem?
 
 
 thanks...
 Uli
 
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: speed issue - inserts slowing down selects

2004-10-13 Thread John McCaskey
One thing you could do, which may not be the best, is insert one row (or
some set limit of rows) at a time, then sleep for 0.25 seconds or so after
each batch, so that your inserts get spread out over time and there is
idle time between them for the selecting clients to complete.  Obviously
this lowers the total number of inserts you can support, but it sounds
like that may be OK; you just don't want the inserts consuming all
resources in a short burst when they could spread out and
still let client requests be fast.
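For reference, the two server-side variants mentioned in this thread look
like this (table and column names invented):

INSERT LOW_PRIORITY INTO quotes (symbol, price) VALUES ('ABC', 1.23);
-- waits until no clients are reading the table before inserting
INSERT DELAYED INTO quotes (symbol, price) VALUES ('ABC', 1.23);
-- queues the row server-side; the manual lists DELAYED for MyISAM-family
-- tables, which may be why it appears to do nothing for the InnoDB table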

I'm not sure why INSERT DELAYED didn't work; the manual clearly says it
should.  I'm going to try it out shortly and see if it works for me.
John

On Wed, 2004-10-13 at 12:02 -0700, Daniel Cummings wrote:
 We have what is called a data grabber for quotes that is inserting rows into
 both InnoDB and MyISAM tables.
 
 When the data is being imported ( inserted ), it dramatically slows down
 clients that are running queries against the table.
 
  
 
 Our first attempt at a solution was to insert one row at a time.  This
 doesn't seem to make a difference.
 
  
 
 The next thing we tried is the Insert Delayed which as I stated in an
 earlier post, doesn't seem to work for InnoDb tables.
 
  
 
 We are also looking at Insert Low_Priority.
 
  
 
 Does anyone have any suggestions?
 
  
 
 TIA
 
  
 
 Dan
 


Re: Re[2]: Diffrences in table types

2004-10-11 Thread John McCaskey
As far as I know, memory usage between the two table types is roughly the
same.  The way memory is set up and used is somewhat different, however.  For
MyISAM the primary memory buffer to accelerate queries is the key_buffer,
which caches data for keys.  In InnoDB you have more options to set, with
the main one being innodb_buffer_pool_size, which is used for caching
keys and data; you want to set this as large as possible.  You also have
several other adjustable buffers, including an 'additional_mem_pool' (which
I'm not quite sure what it is used for) and the log_buffer, which is used
for transaction-related memory, I believe.

So, if you are going to be using both MyISAM and InnoDB you will need
separate buffers, which will of course increase total memory usage, or
leave a smaller size for both.  But if you switch completely to InnoDB
you can drop the MyISAM buffers down to almost nothing (you still need them,
as the mysql tables with user data etc. use them, but say 8 megs would be
plenty).
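As a sketch, a my.cnf for an InnoDB-mostly box (the sizes are purely
illustrative and should be tuned to your RAM):

[mysqld]
key_buffer = 8M                       # MyISAM key cache, just for the mysql.* tables
innodb_buffer_pool_size = 128M        # main InnoDB cache for data and index pages
innodb_additional_mem_pool_size = 4M
innodb_log_buffer_size = 4M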

John

On Sun, 2004-10-10 at 10:51 +0200, Jacques Jocelyn wrote:
 
 
 Hello John,
 
 Interesting post, quite useful,
 Question about performance with InnoDB ?
 say  you  have  a hosting server with 256 Mb of ram, would you know if
 that  will  make  a difference if the major database is converted from
 MyIsam to InnoDb ?
 
 Although,  InnoDB  is not a requirement, just luxury, but I would love
 to enjoy foreign keys and transactions
 
 Please advise,
 Thanks
 
 
 Sunday, October 10, 2004, 8:39:15 AM, you wrote:
 
 JM I meant 'No transaction support', which is you can't use
 JM begin work; ... ; commit; etc to perform transactions, each query
 JM takes effect immeiately and is visible to all other
 JM threads/clients immediately.
 ...
  
 JM Concurrency refers to multiple seperate connections (threads)
 JM trying to read/write to/from the same table at the same time. 
 JM Imagine you have 100 different connections to the database all
 JM trying to write to the same table.  With MyISAM each one will lock
 JM the entire table, and only one will execute at a time, making it
 JM very slow.  In InnoDB each one will only lock the rows it is
 JM modifying and they can all execute at once (if they are not
 JM modifying the same rows), and it will be very fast.
 
 
 
 
 Best regards,
 Jacques Jocelyn
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re[2]: Diffrences in table types

2004-10-11 Thread John McCaskey
Yes, if you use both table types within a single database then you will
have to split up the memory usage.  However, in many databases there are
just one or two tables that use 90% of the disk/memory space.  If this
is your situation then you just allocate most of the memory to the
table type these tables use (assuming they use the same type), and you
won't have any performance problem because the others don't need much
memory.

However, if your data is evenly split and evenly accessed between the
two table types, then splitting the memory may cause some performance
degradation.  Of course, the solution is to buy more memory.

John

On Mon, 2004-10-11 at 09:49 -0700, Benjamin Arai wrote:
 Here is another question.  Can you achieve the same performance having
 two different kinds of tables as though you were only using one?  I
 am assuming that you are going to run into problems because you cannot
 set both types of tables to have a lot of memory allocated to them.
 Right?
 
 


Re: alias not allowed in WHERE clause?

2004-10-11 Thread John McCaskey
You may use aliases if you use HAVING instead of WHERE; this is one of
the defined differences between the two clauses.  HAVING is also slower
and will not be optimized, but if you are placing a complex function
like this in your WHERE you obviously aren't expecting great speed anyway.
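Applied to the query from this thread, that looks like:

SELECT city, state, SQRT( POWER( ( latitude - 39.039200 ), 2 ) +
       POWER( ( longitude + 95.689508 ), 2 ) ) AS distance
FROM geographics
HAVING distance < 1
ORDER BY distance
LIMIT 10;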

John

On Mon, 2004-10-11 at 23:51 +0200, Morten Egan wrote:
 Well, it might not be SQL standard, but most databases out there allow 
 you to use the alias in your where clauses. It helps make the sql more 
 readable, and it shouldn't be that hard to add this feature to the 
 parser, so it can translate that alias back to the original row-source 
 selection during parse time.
 
 /morten
 
 Paul DuBois wrote:
 
  At 11:32 -0700 10/11/04, Nathan Clark wrote:
 
  SELECT city, state, SQRT( POWER( (
  latitude - 39.039200
  ), 2 ) + POWER( (
  longitude + 95.689508
  ), 2 ) ) AS distance
  FROM geographics
   WHERE distance < 1
  ORDER BY distance
  LIMIT 10;
 
  Returns:
  #1054 - Unknown column 'distance' in 'where clause'
 
  Are alias not allowed in WHERE clauses?
 
 
  How could they be?  The WHERE clause determines which rows to select.
  Aliases are defined for columns from the rows that have been selected.
 
 
  I am able to replace the alias with the entire math function, and it
   works as desired.  However, I do not like the heaviness/repetitiveness
  of the query.
 
 
 
 
 
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Diffrences in table types

2004-10-09 Thread John McCaskey
It's pretty much just how you said it.  MyISAM tables (the default MySQL table type) 
do not support foreign key constraints; InnoDB tables do support them.
 
Here is a breakdown of the pros and cons of each as I see it (others, please feel free 
to tell me I've got it all wrong :)).
 
MyISAM pros:
1)fast
2)default
3)smaller disk footprint
 
MyISAM cons:
1)Table level locking creates poor performance in high concurrency situations
2)No foreign key constraints
3)No transaction support
 
InnoDB pros:
1)Transactions
2)Row level locking for good performance in high concurrency situations
3)Foreign Key Constraints
 
InnoDB cons:
1)Higher disk footprint
2)Slightly slower in non-high-concurrency situations due to transaction overhead, 
key constraint checking, etc.
 
I've had a very positive experience with using InnoDB tables in a production 
environment with a several gigabyte database.
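Converting an existing table is a single statement in 4.0 (it rebuilds the
table, so expect it to take a while on large ones):

ALTER TABLE mytable TYPE = InnoDB;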
 
John McCaskey



From: GH [mailto:[EMAIL PROTECTED]
Sent: Sat 10/9/2004 5:29 PM
To: [EMAIL PROTECTED]
Subject: Diffrences in table types



(SORRY FOR THE BLANK MESSAGE WRONG BUTTON PUSHED)

Greetings:

   I am trying to create referential integrity...  I was reading that
MySQL does not support that at the database level and that you have to
do it manually.


  However I am now seeing some posts that talk about it, i.e.
foreign keys and such, with INNODB.

  Can someone please fill me in?

Thanks

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





Re: Question about using select...where f in (xxx)

2004-10-07 Thread John McCaskey
I second this.  The way you are storing multiple data items in a single
column is very bad database design and fails even the first normal form.
You are going to get very poor performance and usability out of this
table design.  I know you said you inherited the data, so it's probably
not your fault, but I believe you will save yourself considerable
headache by refactoring the table rather than working around the poor
design.
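To make the refactor concrete, a sketch using Shawn's suggested layout
(GROUP_CONCAT needs MySQL 4.1+; the names follow the original schema):

CREATE TABLE Bookings2 (customer VARCHAR(200), event_id INT);
INSERT INTO Bookings2 VALUES ('john',1), ('john',3), ('jane',2), ('jane',3);
SELECT b.customer, GROUP_CONCAT(e.e_name) AS events
FROM Bookings2 b JOIN Events e ON e.e_id = b.event_id
GROUP BY b.customer;
-- returns john -> 'one,three' and jane -> 'two,three'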

John McCaskey

On Thu, 2004-10-07 at 13:03 -0400, [EMAIL PROTECTED] wrote:
 I would strongly recommend refactoring as the string transformation you 
 mention cannot be accomplished without external scripting assistance.
 
 change your Bookings table to be (customer varchar(200), event_Id int(11))
 
 This way, each booking becomes one row in the table. Five bookings = five 
 rows. This change also allows you to add additional fields (like price, 
 date, contact info, etc) on the Bookings table so that you can track that 
 information about EACH booking. I have no idea how you are tracking any of 
 that the way you have the data now.
 
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine
 
 Neal K [EMAIL PROTECTED] wrote on 10/07/2004 12:33:40 PM:
 
  I have a quick SQL question that I hope someone can answer,
  
  I have two tables
  Events { e_id int(11), e_name varchar(200) };
  Bookings { customer varchar(200), event_list varchar(200) };
Event_list is a string of comma separated values, eg  1,2,4   where 
 the
  number corresponds to Events.e_id
  
  Sample data:  Events { {1,'one'}, {2,'two'}, {3,'three'} }
Booking { {'john','1,3'} , {'jane','2,3'} }
  
  I would like some query that will return 2 rows
  
  John, 'one, three'
  Jane, 'two,three'
  
  
  Is this possible ?
  
  I inherited the schema and the data, so I would prefer not having to
  refactor anything, unless I have to.
  
  Thanks in Advance,
  Neal
  
  
  
  -- 
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
  

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: INSERT IGNORE like feature for rows failing foreign key constraints?

2004-08-31 Thread John McCaskey
I never got a reply for this, and I'm still trying to figure out the
best way to handle it.  Anyone?

John A. McCaskey

-Original Message-
From: John McCaskey [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 25, 2004 2:17 PM
To: [EMAIL PROTECTED]
Subject: INSERT IGNORE like feature for rows failing foreign key
constraints?

I have a logging table where I insert a large number of rows every 5
minutes.  For performance reasons this occurs in bulk inserts of about
5000 rows at a time.  (ie. INSERT INTO table VALUES(...), (...), (...))

One of the fields in the table is an id that connects it to another
table.  It is possible that by the time the insert occurs (they queue up
in memory briefly before I create the bulk insert) a separate process
has deleted the entry in the parent table and the id is invalid.

When this happens right now the entire insert of 5000 rows fails because
one single row is bad.  I want the behavior to be that the one fails
silently and the other 4999 insert successfully.

Any ideas how I can do this?  It seems like INSERT IGNORE would make
sense, but that appears to only ignore duplicate keys, not foreign key
failures.

John A. McCaskey
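One workaround (a sketch; the staging, log, and parent table names and the
sample values are hypothetical) is to stage the batch in a table with no
foreign key, then copy it across with an INSERT ... SELECT that joins
against the parent table, so rows whose parent has since been deleted are
silently filtered out:

INSERT INTO log_staging (parent_id, ts, avg) VALUES (42, NOW(), 1.5);

INSERT INTO log (parent_id, ts, avg)
SELECT s.parent_id, s.ts, s.avg
FROM log_staging s
JOIN parent p ON p.id = s.parent_id;   -- drops the orphaned rows

DELETE FROM log_staging;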


 





INSERT IGNORE like feature for rows failing foreign key constraints?

2004-08-25 Thread John McCaskey
I have a logging table where I insert a large number of rows every 5
minutes.  For performance reasons this occurs in bulk inserts of about
5000 rows at a time.  (ie. INSERT INTO table VALUES(...), (...), (...))

 

One of the fields in the table is an id that connects it to another
table.  It is possible that by the time the insert occurs (they queue up
in memory briefly before I create the bulk insert) a separate process
has deleted the entry in the parent table and the id is invalid.

 

When this happens right now the entire insert of 5000 rows fails because
one single row is bad.  I want the behavior to be that the one fails
silently and the other 4999 insert successfully.

 

Any ideas how I can do this?  It seems like INSERT IGNORE would make
sense, but that appears to only ignore duplicate keys, not foreign key
failures.

 

John A. McCaskey

Software Development Engineer

IP Sciences, Inc.

[EMAIL PROTECTED]

206.902.2027

 



RE: can't log in mysql server

2004-08-02 Thread John McCaskey
Did your client-side host change?  The user/pass are sometimes set up to
only allow access from a specific host, subnet, or single IP address.

John A. McCaskey
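A quick way to check (a sketch; the account name and host are hypothetical):

-- which host patterns is the account allowed to connect from?
SELECT User, Host FROM mysql.user WHERE User = 'jean';
SHOW GRANTS FOR 'jean'@'localhost';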


-Original Message-
From: Jean Zhong [mailto:[EMAIL PROTECTED] 
Sent: Monday, August 02, 2004 1:27 PM
To: [EMAIL PROTECTED]
Subject: can't log in mysql server


Hello everyone,

I met a strange problem.
I suddenly can't log in to the mysql server using my
password.
I checked the modified date of user.MYD & user.MYI in the
/usr/local/mysql/data/mysql/ folder. Neither of them was modified
recently. That means I didn't change the password for the mysql server.

Did anyone meet the same problem before? Does anyone
know how to deal with it? Thank you very much.

I still tried to reset the password following the
mysql manual:

 mysqld_safe --skip-grant-tables 
or
 safe_mysqld --skip-grant-tables 

But it gave me the following errors in the
localhost.err:

040802 16:08:02  mysqld started
Warning: One can only use the --user switch if running
as root
040802 16:08:03  InnoDB: Started
040802 16:08:03  /usr/local/mysql/bin/mysqld: Can't create/write to file
'/usr/local/mysql/data/localhost.pid' (Errcode: 13)
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.18-debug-debug'  socket:
'/tmp/mysql.sock'  port: 3306

And, when I tried to connect to the mysql server:
 mysql -u root

and update the password, use:
 UPDATE mysql.user SET Password=PASSWORD('newpwd')
   WHERE User='root';

I got the following error:

ERROR 1017: Can't find file: './mysql/user.frm'
(errno: 13)


Does anyone have any idea about this?

I really appreciate your help and looking forward to
your reply.

Jean


 






RE: Splitting data across tables

2004-07-21 Thread John McCaskey
I've had no problems partitioning data in this exact same manner.
However, my timestamp column is always pre-computed in the application
code, because it is necessary to round it to the last 5-minute interval,
so I would not encounter the issue you mention.  I'd recommend simply
computing the timestamp in advance.  If you have a database substantial
enough to need this partitioning, and such that you can't delete as
fast as you insert presently, then I'm guessing the database server is
your major bottleneck in overall performance anyway.  In which case
offloading the date computation to the application server is probably a
smart move regardless.

The only complication we have had since moving to this partitioning
scheme is that it's more complicated to create reports across tables (we
have to deal with both user local time and GMT, so sometimes a daily
report will span two tables).

John A. McCaskey
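For reference, the rounding John describes looks like this when expressed
in SQL (a minimal sketch; in practice he computes it application-side):

SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(NOW()) - UNIX_TIMESTAMP(NOW()) % 300)
       AS last_5min_boundary;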


-Original Message-
From: Andrew Hill [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 21, 2004 8:00 AM
To: [EMAIL PROTECTED]
Subject: Splitting data across tables


Hi all,

We have quite a large table, and are considering splitting the data into
daily tables - something like tblname_20040721, tblname_20040722, etc.
Hopefully, this will allow us to more easily delete old data through
dropping the old tables, instead of using DELETE. (At present, we're
inserting data into this table faster than we can remove it...)

I was wondering if anyone has any experience and advice to give on going
down this path, that they would be willing to share?

Specifically, I'm concerned by the possibility of inserting data into
the wrong table: Say a client notes that the date is 20040721
23:59:58, and so decides to insert the data into the tblname_20040721
table - and then, when the insert happens, it's now 20040722 00:00:01.
If the table has a timestamp column that is set by the MySQL server,
then the timestamp will be in 20040722, but the data will be inserted
into the table for 20040721.

Obviously, this could also happen should NTP get out of whack...

Perhaps it would be better to insert the timestamp, rather than letting
MySQL set it. Or, is there perhaps a way to let MySQL select which table
to perform the insert into, based on the time?

Any ideas or comments?

TIA,

--
Andrew Hill
Developer
Awarez Ltd.
Kirkman House, 12-14 Whitfield Street, London W1T 2RF
T: +44 (0)20 7299 7370  F: +44 (0)20 7299 7374




RE: Splitting data across tables

2004-07-21 Thread John McCaskey
As far as I know there is no way to make mysql choose the table to
insert into dynamically.  However, it is probably possible to write a
user-defined function that will do so.  You might want to look into that
option.

John A. McCaskey
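As for the clock-skew worry, one option is to let the database server,
rather than the client, supply the date the table name is built from (a
sketch; the application still assembles the final INSERT):

SELECT CONCAT('tblname_', DATE_FORMAT(NOW(), '%Y%m%d')) AS target_table;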

-Original Message-
From: Andrew Hill [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 21, 2004 8:46 AM
To: [EMAIL PROTECTED]
Subject: RE: Splitting data across tables


Hi,

Okay, cool. A couple of things:

1) We *think* that our bottleneck is RAM and the calculation of indexes
when inserting into the table (about 500 million rows). There's
certainly plenty of CPU and disk I/O left in the machine. Hopefully,
with smaller tables, the inserts will happen faster as well, and getting
timestamps from MySQL won't be an issue.

2) We're lucky with the report generation, in that everything can be
summarised into stats tables reasonably easily :-)

So, I guess that I'm more concerned with ensuring that the times are
correct, in case of the clocks on the clients going crazy. That's why I
was hoping there was a way of making MySQL select the table to insert
into based on the time...

TIA,

--
Andrew Hill
Developer
Awarez Ltd.
Kirkman House, 12-14 Whitfield Street, London W1T 2RF
T: +44 (0)20 7299 7370  F: +44 (0)20 7299 7374


 -Original Message-
 From: John McCaskey [mailto:[EMAIL PROTECTED]
 Sent: 21 July 2004 16:24
 To: Andrew Hill; [EMAIL PROTECTED]
 Subject: RE: Splitting data across tables
 
 I've had no problems partitioning data in this exact same manner. 
 However my timestamp column is always pre-computed in the application 
 code because it is neccesary to round it to the last 5 minute interval

 so I would not encounter the issue you mention.  I'd recommend simply 
 computing the timestamp in advance.  If you have a database 
 substantial enough you need this partitioning, and such that you can't

 delete as fast as you insert presently, then I'm guessing the database

 server is your major bottleneck in overall performance anyway.  In
 which case offloading the date computation to the application 
 server is probably a smart move regardless.
 
 The only complication we have had since moving to this
 partioning scheme is that its more complicated to create 
 reports across tables (we have to deal with both user local 
 time and GMT, so sometimes a daily report will span two tables).  
 
 John A. McCaskey
 
 
 -Original Message-
 From: Andrew Hill [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, July 21, 2004 8:00 AM
 To: [EMAIL PROTECTED]
 Subject: Splitting data across tables
 
 
 Hi all,
 
 We have quite a large table, and are considering splitting
 the data into
 daily tables - something like tblname_20040721, tblname_20040722, etc.
 Hopefully, this will allow us to more easily delete old data through
 dropping the old tables, instead of using DELETE. (At present, we're
 inserting data into this table faster than we can remove it...)
 
 I was wondering if anyone has any experience and advice to
 give on going
 down this path, that they would be willing to share?
 
 Specifically, I'm concerned by the possibility of inserting data into 
 the wrong table: Say a client notes that the date is 20040721 
 23:59:58, and so decides to insert the data into the tblname_20040721 
 table - and then, when the insert happens, it's now 20040722 00:00:01.

 If the table has a timestamp column that is set by the MySQL server, 
 then the timestamp will be in 20040722, but the data will be inserted 
 into the table for 20040721.
 
 Obviously, this could also happen should NTP get out of whack...
 
 Perhaps it would be better to insert the timestamp, rather
 than letting
 MySQL set it. Or, is there perhaps a way to let MySQL select 
 which table
 to perform the insert into, based on the time?
 
 Any ideas or comments?
 
 TIA,
 
 --
 Andrew Hill
 Developer
 Awarez Ltd.
 Kirkman House, 12-14 Whitfield Street, London W1T 2RF
 T: +44 (0)20 7299 7370  F: +44 (0)20 7299 7374
 

RE: Relational Integrity

2004-07-19 Thread John McCaskey
MyISAM tables are sometimes faster than InnoDB, but for most
applications the difference is going to be negligible.  MyISAM tables
also use less disk space (more compressed row format).  These are the
only two advantages I'm aware of.  InnoDB on the other hand offers you
foreign keys and transaction support at a small speed penalty.

The other difference between the two is that MyISAM does table-level
locking, while InnoDB does row-level locking.  This means that while
MyISAM is generally considered faster, it may actually turn out that
InnoDB is faster for you if you have a high level of concurrency
occurring and you end up with table-lock contention between
processes/threads in MyISAM.

I switched to InnoDB about 8 months ago and have been very happy with it
ever since.  For me, concurrency and table-level locking were what
prompted the switch.  Foreign keys and transactions were just icing on
the cake.  For the application you describe I think you will do fine
with either table type.

John A. McCaskey
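If you do go with InnoDB, declaring the kind of link you describe is
straightforward (a sketch; the table and column names are hypothetical, and
in 4.0 the option is spelled TYPE= rather than ENGINE=):

CREATE TABLE part_type (
  id INT NOT NULL,
  name VARCHAR(64),
  PRIMARY KEY (id)
) TYPE=InnoDB;

CREATE TABLE finished_part (
  id INT NOT NULL,
  part_type_id INT NOT NULL,
  PRIMARY KEY (id),
  INDEX (part_type_id),
  FOREIGN KEY (part_type_id) REFERENCES part_type (id)
) TYPE=InnoDB;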

-Original Message-
From: Roy Harrell [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 19, 2004 2:03 PM
To: [EMAIL PROTECTED]
Subject: Relational Integrity


I need some general guidance on relational integrity.
I'm setting up a reasonably small DB with 30 or so
tables for a machine control application. Several of
the tables will have referential links to each other
(e.g. a finished part table will link to a master
part type table via the product ID number). 
None of my tables will ever contain more than a few
hundred thousand records.

This database is a conversion from an existing MS SQL7 
system in which I made extensive use of foreign keys.
SQL7 has worked out well in the past, but Windows and
VB.NET have ceased to be an efficient machine control
development environment. We have decided
to migrate to Linux on all of our new systems where
practical.

My first stab at a MySQL implementation is to use the
MyISAM table structure and not the InnoDB structure,
foregoing the use of explicit foreign keys and letting
my apps take care of the relational integrity. I gathered
from reading DuBois that this is not an uncommon approach
to a MySQL implementation. Question: Are the advantages
of MyISAM tables vs. InnoDB tables sufficient for me
to continue this approach or am I better off setting
up InnoDB tables throughout?


Thanks in advance for any advice.
 
Sincerely,

Roy Harrell
Adaptive Equipment
2512 NE 1st Blvd #400
Gainesville, FL   32609
352.372.7821
[EMAIL PROTECTED]






RE: INDEX DESC

2004-06-23 Thread John McCaskey
It sounds like the values you want to index are timestamps.  If this is
the case you can do something tricky like using an integer column and
storing -(unix timestamp) values, so that what mysql sees as ASC will
really be your data in DESC order.  Of course there is now some overhead
involved in doing a unary - operation on each value and in converting
the unix timestamp to whatever format you need it in.

I wouldn't recommend relying on the index to keep records sorted anyway;
there is no guarantee they will be.  You really ought to use the ORDER
BY even when selecting ASC.

John A. McCaskey
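A minimal sketch of the trick (table and column names hypothetical):

CREATE TABLE readings (
  neg_ts INT NOT NULL,   -- negated unix timestamp
  val FLOAT,
  PRIMARY KEY (neg_ts)
);

INSERT INTO readings (neg_ts, val) VALUES (-UNIX_TIMESTAMP(NOW()), 1.0);

-- ascending index order is now descending time order
SELECT FROM_UNIXTIME(-neg_ts) AS ts, val FROM readings ORDER BY neg_ts;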


-Original Message-
From: gerald_clark [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, June 23, 2004 1:30 PM
To: Alejandro Heyworth
Cc: [EMAIL PROTECTED]
Subject: Re: INDEX DESC




Alejandro Heyworth wrote:

 I don't see any relationship between your two sentences?
 Thanks for the responses.

 I guess I was not clear enough in my last post.

 You can define an index to sort values in a particular order...  ASC
 is the default.
 DESC is an option, but it is not implemented yet.  I want to know when

 it will be implemented.

 I would like the values in my table to be stored in DESC order.

Values are stored wherever the engine finds room.



 For example, I want to read ranges of values from my db in the
 opposite order of how they are currently being indexed without using 
 an extra ORDER BY in my select. This ORDER BY forces an internal sort 
 to put the result set in DESC order which slows my SELECT.

 Currently, if I insert integer values into a table with an index ASC
 (or DESC) on the INT column , a general SELECT will return the values 
 in ascending order.
 1 2 3 4 5 

Not guaranteed without ORDER BY.



 I want to naturally store the data as:
 5 4 3 2 1
 without adding an ORDER BY int_col DESC

You still must.



 It would make a world of difference to our app since we are dealing
 with huge ranges of joint compression information that need to be read

 out sequentially.

 I hope this is clearer.

 So, does anyone know when it will be implemented?








RE: Insert data if not duplicate based on order

2004-06-21 Thread John McCaskey
I don't think it's possible in one query.  One thing you can do is lock
the table when you select the 20 rows and determine whether to do the
insert.  Then unlock when done.  This avoids the concurrency issue you
are having, but it may cause unacceptable performance if you have a lot
of queries hitting this table.  Maybe someone else has a better
solution.

John A. McCaskey
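A sketch of the locking approach, using the column names from the question
(the table name is hypothetical):

LOCK TABLES mytable WRITE;
SELECT datax FROM mytable ORDER BY dataz DESC LIMIT 20;
-- the application scans those 20 values; only if 'x' is absent:
INSERT INTO mytable (datax, datay, dataz) VALUES ('x', 'y', NOW());
UNLOCK TABLES;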

-Original Message-
From: Grant Giddens [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 21, 2004 4:59 PM
To: [EMAIL PROTECTED]
Subject: Insert data if not duplicate based on order


Hi,

I have a table that has many rows.  I want to be
able to insert a new row only if one of its fields is
unique within the last 20 rows when I sort them by date.

Currently before I insert a new row, I will select a
specific field for 20 rows and run a for loop in php
looking for a match with the data I'm getting ready to
insert.  If a match occurs I will skip the insert
command.

This normally works fine, but I have seen occasions
when 2 people visit my website at the same exact time.
 In this case, sometimes I might insert the data twice
because I think I'm spinning in the for loop for 2
site visitors simultaneously.

I know this is bad syntax, but this is what I imagine
a insert query to look like:


INSERT INTO $table (datax, datay, dataz)
   VALUES ('x', 'y', 'z') IF 'x' unique WHEN
ordering by 'z' DESC LIMIT 20

In this example, 'z' is a datetime field and 'x' is a
varchar.  I don't mind duplicate entries in my table,
but I don't want duplicates entries to occur within 20
rows when sorting by date.

Is this possible to do with one INSERT command?  I saw
the IGNORE statement in the mysql manual, but I don't
think it really applies to what I'm trying to do here.

Any suggestions would be appreciated,
Thanks,
Grant 





RE: GROUP BY across UNION

2004-06-18 Thread John McCaskey
Hi Mike, 

This is a good suggestion.  We ended up changing the requirements to not
require the functionality I was trying to develop at the time.  However,
I did just change a temporary table I'm using for a similar process to
HEAP and saw a very nice performance improvement.  Should have thought
about that before.

Thanks,

John A. McCaskey
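For the archives, Mike's suggestion applied to the original problem looks
roughly like this (a sketch; UNION ALL rather than UNION, because an average
must keep duplicate rows, and STD() covers the standard-deviation need):

CREATE TEMPORARY TABLE tmp_avgs TYPE=HEAP
  SELECT id_field, avg FROM table_a
  UNION ALL
  SELECT id_field, avg FROM table_b;

SELECT id_field, AVG(avg), STD(avg) FROM tmp_avgs GROUP BY id_field;

DROP TABLE tmp_avgs;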


-Original Message-
From: Michael McTernan [mailto:[EMAIL PROTECTED] 
Sent: Friday, June 18, 2004 11:16 AM
To: John McCaskey
Cc: [EMAIL PROTECTED]
Subject: RE: GROUP BY across UNION


Hi John,

Depending on the size of your datasets, you could merge the data into a
TEMPORARY table and then compute from there?  If the temp table is small
enough it will fit in RAM as a heap table, and will probably be more
efficient than fetching all the results and computing them in code.  Of
course, if the dataset is large enough, the temporary table will hit the
disc, and then it will be very inefficent though.

Thanks,

Mike

 -Original Message-
 From: John McCaskey [mailto:[EMAIL PROTECTED]
 Sent: 24 February 2004 17:08
 To: [EMAIL PROTECTED]
 Subject: RE: GROUP BY across UNION


 Nope, this would yield a 'Column: 'avg' in field list is ambiguous' 
 error.  I'm still trying to work out a better way of doing this.  I 
 also need to get standard deviations now, and the method I mentioned 
 in my original post doesn't even work for that.  So now I'm left with 
 actually getting all the values and computing them in code.  Very sub 
 optimal.

 John A. McCaskey


 -Original Message-
 From: Ligaya Turmelle [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, February 18, 2004 6:30 PM
 To: [EMAIL PROTECTED]
 Subject: Re: GROUP BY across UNION


 I am pretty new myself but can't you do it with a join? Like SELECT
 AVG(avg) FROM table_a, table_b GROUP BY id_field.

 Respectfully,
 Ligaya Turmelle

 John McCaskey [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 First of all I am using mysql 4.0.18 and I am not free to upgrade to 
 4.1 or 5.0.



 I have two tables:



 table_a, and table_b these two tables have the same structure:

 CREATE table_a (

   id_field mediumint(8) unsigned NOT NULL,

   avg float default NULL

 )



 What I want to do is get the combined avg across table_a and table_b 
 for each id value.  So I tried doing

 (SELECT AVG(avg) FROM table_a)

 UNION

 (SELECT AVG(avg) FROM table_b)

 GROUP BY id_field;



 This, however, doesn't work.  I can see why when thinking about how a

 union should work mathematically, but I'm left with no apparent way to 
 directly get the avg across two tables.  I know that I can take the 
 avg from each along with the row count and then do a weighted average 
 using those values, but I'd prefer not to have to do that.  Any 
 suggestions, or am I stuck doing individual group bys on each table, 
 unioning the results and then doing weighted averages on the matching

 pairs?



 Here is what I'm talking about doing with the weighted averages incase

 it is unclear:

 (SELECT AVG(avg) AS avg_a, COUNT(id_field) AS count_a FROM table_a 
 GROUP BY id_field)

 UNION

 (SELECT AVG(avg) AS avg_b, COUNT(id_field) AS count_b FROM table_b 
 GROUP BY id_field);



 Then I would match up the results and compute total_avg = 
 (avg_a*count_a
 + avg_b*count_b)/(count_a+count_b).  This is not nearly as clean as I
 would like.







 John A. McCaskey









RE: How to Fix Broken Replication

2004-06-14 Thread John McCaskey
Give the below a shot:

slave stop;
set global sql_slave_skip_counter=1;
slave start;

Good luck.

John A. McCaskey

-Original Message-
From: Henry Chang [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 14, 2004 3:34 PM
To: [EMAIL PROTECTED]
Subject: How to Fix Broken Replication



I have MySQL replication setup between Master A and Slave B. Everything
was working fine, until one day replication was broken. Status on the
slave shows the following error message:

Last_error: Error 'Can't drop database X.  Database doesn't exist'
on query 'DROP DATABASE X'.

I suspect another admin probably deleted or moved the database from the
linux command line.  At this point, how can I tell Slave B to jump over
this drop command and resume replication at the next command??  Or is
re-installing replication the only option on Slave B??

Any help would be greatly appreciated.

Master A & Slave B
MySQL version 4.0.18
Red Hat 9





RE: Error 1054

2004-05-26 Thread John McCaskey
Unless outlook is just formatting your message strangely, it looks like
the actual name of the ID field is `ID  ` with two space chars
included.  As such you probably want to recreate the table using `ID`
in the create statement so that it will get created as you expect
without these extra chars.

John A. McCaskey
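Alternatively, the column can be renamed in place rather than recreating
the whole table (a sketch, using the definition from the SHOW CREATE TABLE
output below):

ALTER TABLE Spot CHANGE `ID  ` `ID` int(11) NOT NULL default '0';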

-Original Message-
From: kaustubh shinde [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, May 26, 2004 1:11 PM
To: Michael Kruckenberg
Cc: [EMAIL PROTECTED]
Subject: Re: Error 1054


Hi,
Following is the output of show create table Spot
command

| Table | Create Table |
+-------+--------------+
| Spot  | CREATE TABLE `Spot` (
  `Name` varchar(32) default NULL,
  `ID  ` int(11) NOT NULL default '0',
  `Row` varchar(32) default NULL,
  `Column` varchar(32) default NULL,
  `Probe_ID` int(11) NOT NULL default '0',
  `Array_Type_ID` int(11) NOT NULL default '0',
  PRIMARY KEY  (`ID  `)
) TYPE=MyISAM |

As you can see, ID was actually stored as `ID  `.
My mistake. Thanks a lot for all the replies. Learnt
something new today! :)
Thanks again
Kaustubh
--- Michael Kruckenberg
[EMAIL PROTECTED] wrote:
 Maybe a long shot, but creating this table should
 have required using
 backticks around `Column` because it's a reserved
 word. Made me wonder 
 if the ID field in the create statement was in
 backticks as well and 
 maybe has an extra character.
 
 What does show create table Spot show?
 
  Hi,
  I have a table Spot in the database as follows
  
   desc Spot;

 +---------------+-------------+------+-----+---------+-------+
 | Field         | Type        | Null | Key | Default | Extra |
 +---------------+-------------+------+-----+---------+-------+
 | Name          | varchar(32) | YES  |     | NULL    |       |
 | ID            | int(11)     |      | PRI | 0       |       |
 | Row           | varchar(32) | YES  |     | NULL    |       |
 | Column        | varchar(32) | YES  |     | NULL    |       |
 | Probe_ID      | int(11)     |      |     | 0       |       |
 | Array_Type_ID | int(11)     |      |     | 0       |       |
 +---------------+-------------+------+-----+---------+-------+
  
  When I try to access the field ID shown above
 though,
  it gives me error:
  mysql> select ID from Spot;
  ERROR 1054: Unknown column 'ID' in 'field list'
  
  This is kind of weird. I will really appreciate
 any
  ideas
  Thank you,
  Kaustubh
  
  
  
  
  
  
 
 
 --
 http://mike.kruckenberg.com |
 [EMAIL PROTECTED]
 
 








RE: sql_no_cache

2004-04-27 Thread John McCaskey
One possibility is that, after the first execution, the OS has the
portion of disk that the row is stored in cached in memory via its
normal disk caching.  Another possibility is that the key for the table
is in mysql's key_buffer after the first execution.  If you are using
innodb then it might be cached in the buffer_pool.  There are quite a
few levels of caching going on at the mysql and os level and they all
need to be considered.

John A. McCaskey
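Note that the query cache is only one of those layers; you can sidestep it
per statement, and empty it outright, but neither touches the OS cache or
the key_buffer (a sketch; the table name is hypothetical):

SELECT SQL_NO_CACHE * FROM mytable WHERE id = 42;

RESET QUERY CACHE;   -- empties the cache; FLUSH QUERY CACHE only defragments it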


-Original Message-
From: Boyd E. Hemphill [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 27, 2004 12:41 PM
To: [EMAIL PROTECTED]
Subject: sql_no_cache


I am trying to turn off the query caching for select queries I am testing,
as I would like to rerun them as if they were the first hit.

The query-cache-type = 1.

I am suspicious b/c I run a query and it takes 12 seconds.  I then run
the same query with no changes and it takes .17 seconds.  

Another piece to this puzzle: I issued the FLUSH QUERY CACHE command,
then reran the query, and it still took a very short time.  Since this
is a devel server and I am the only one around, I don't think there are
any other obvious things going on.  Mytop is clear of processes.

Am I missing something?

Boyd E. Hemphill





RE: Last Record Pulling my hair out :D

2004-04-22 Thread John McCaskey
Try SELECT * FROM job_log_2004 ORDER BY JobID DESC LIMIT 1.  This is
simpler than your subselect method and probably faster.  Subselects
don't work in MySQL 4.0; they only arrive in the 4.1 betas, so unless
you are running one of those, that is probably why it fails.

John A. McCaskey

-Original Message-
From: James [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 22, 2004 12:43 PM
To: [EMAIL PROTECTED]
Subject: Last Record Pulling my hair out :D


I'm trying to select the last record (row) in a db.  I'm trying with a
select:
Select * from job_log_2004 where job_log_2004.JobID = (select
max(job_log_2004.JobID) from job_log_2004) and it fails.  The part about
select max works and returns a number. What on earth am I doing wrong?
Or is there an easier way to return the last record? Thanks

Thanks, 
James 








RE: What is Frequency of Master Binlog Dump to Slave

2004-03-22 Thread John McCaskey
The master pushes data to the slave as soon as it has executed the query
itself.  It is not a periodic push, but an asynchronous push as soon as
data is ready to be sent.  So the gap would only be as great as the
latency between your two servers.

If the servers are disconnected or unable to communicate for any reason,
you need to be careful about your timeout and connection-retry values,
or they may not attempt to reconnect for a while, and then there will be
a data gap until they do.

John
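The relevant knob on the slave side can be inspected and adjusted from SQL
(a sketch; the 60-second value is arbitrary):

SHOW VARIABLES LIKE 'slave_net_timeout';
SET GLOBAL slave_net_timeout = 60;   -- notice a dead link, and reconnect, sooner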

-Original Message-
From: Henry Chang [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 22, 2004 11:32 AM
To: [EMAIL PROTECTED]
Subject: What is Frequency of Master Binlog Dump to Slave



I got MySQL replication working in a master-slave configuration.  It's
really cool, but how often does the master send the binlog dump to the
slave?  The implication is: if the master crashes, what would be the
potential data gap on the slave??





RE: Replication / Synchronizing DB across different machines

2004-03-01 Thread John McCaskey
I believe Gowtham's response is accurate.

If the link goes down it is going to be unclear which update becomes the
final one, and it will be possible for one side to contain the row after
one update, and the other to contain the row after the other update,
thus being out of sync.

Mysql has no support for resolving these types of conflicts, so you
would need to ensure updates are done in a safe manner at the
application level.  As far as inserts and auto-increment columns are
concerned, one way to handle it is to use a unique server id and create
a combined key (auto_increment_field, server_id).

If all you really need is a failover server and you don't need to do
active/active load balancing then you can avoid these type of update
issues fairly easily.

As for whitepapers, I'm not aware of any good ones, but the replication
FAQ in the mysql manual provides a good starting point for this type of
two-way replication setup.

John 
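A sketch of the combined-key idea (table and column names hypothetical):
each server stamps rows with its own server id, so identical
auto-increment values generated on the two masters can coexist without a
key collision:

CREATE TABLE events (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  server_id TINYINT UNSIGNED NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id, server_id)
) TYPE=InnoDB;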

-Original Message-
From: Gowtham Jayaram [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 01, 2004 1:22 PM
To: [EMAIL PROTECTED]
Subject: Re: Replication / Synchronizing DB across different machines 


 But what happens if there is a break of
 communication and the same record on both machines
 is modified?  For example, if the sync link is ?
 down, and I update record #1 on A, while at the 
 same time, another user is modifying record #1 on
 server B.  When the link comes back online, how 
 will the servers be able to synchronize record #1,
 or in the case that this is impossible (I can't 
 even imagine how I would do that manually - merge? 
 A overwrite B?  B overwrite A?), how does MySQL 
 flag it and let me know that there is a discrepancy

 in the two DBs?

From my understanding (please bear in mind I am new to
MySQL replication):

MySQL Replication does not support conflict
resolution.  So in the scenario wherein you have dual
masters actively updating the same data, the
conflict-resolution onus falls on the application
using MySQL.

Also, there is no need for the link between 'A' and
'B' to go down for this issue to surface.  The fact
that you have a dual-master setup is reason enough for
the problem you have explained above to surface.
  
Gowtham










RE: Replication / Synchronizing DB across different machines

2004-02-29 Thread John McCaskey
Yes,
 
Let's call the two servers A and B.  You set A as the master for B, and you set B as 
the master for A.  In this way you can insert/delete/update on either side and both 
sides will be kept in sync.  If you plan to actively use both at the same time you do 
need to be wary of some synchronization issues, however (for example, auto-increment 
columns may be unsafe if you are inserting to both at the same time).  
 
So the answer to your question is that getting the slave to synchronize back to the 
master is not a nightmare, because both can be masters and slaves to each other, and 
then this will all be handled automatically as soon as the servers are able to 
reconnect to each other.  
 
John
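The wiring is symmetric; on each box you point the slave side at the other
box (a sketch; host, user, password, and binlog coordinates are
hypothetical):

-- on A (and the mirror image on B)
CHANGE MASTER TO
  MASTER_HOST='serverB', MASTER_USER='repl', MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='serverB-bin.001', MASTER_LOG_POS=4;
SLAVE START;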



From: news on behalf of Eric B.
Sent: Sun 2/29/2004 9:51 PM
To: [EMAIL PROTECTED]
Subject: Replication / Synchronizing DB across different machines



Hi, 

I'm at the stage where I need to create redundancy in my network / DB 
server.  Am currently using MySQL 4.0.x, and have been trying to figure out 
how I can safely replicate/synchronize between different machines.  I know 
there is a way to do a Master / Slave replication, but I'm looking more for 
something that will allow 2 way replication. 

My problem is that I need to have my DB always available.  If my primary 
server (master) goes down, I need another server (hosted in a different 
geographical location) to pick up the slack until my primary server comes 
back alive.  If I do a simple master/slave replication, getting the slave to 
synchronize back to the master once the master comes back alive is a 
nightmare. 

MS SQL allows full 2 way synchronization between different servers.  Is 
there anything I can do in MySQL that will produce a similar result? 

Thanks, 

Eric 







RE: GROUP BY across UNION

2004-02-24 Thread John McCaskey
Nope, this would yield a 'Column: 'avg' in field list is ambiguous'
error.  I'm still trying to work out a better way of doing this.  I also
need to get standard deviations now, and the method I mentioned in my
original post doesn't even work for that.  So now I'm left with actually
getting all the values and computing them in code.  Very suboptimal.

John A. McCaskey


-Original Message-
From: Ligaya Turmelle [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 18, 2004 6:30 PM
To: [EMAIL PROTECTED]
Subject: Re: GROUP BY across UNION


I am pretty new myself but can't you do it with a join? Like SELECT
AVG(avg) FROM table_a, table_b GROUP BY id_field.

Respectfully,
Ligaya Turmelle

John McCaskey [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
First of all I am using mysql 4.0.18 and I am not free to upgrade to 4.1
or 5.0.



I have two tables:



table_a, and table_b these two tables have the same structure:

CREATE table_a (

  id_field mediumint(8) unsigned NOT NULL,

  avg float default NULL

)



What I want to do is get the combined avg across table_a and table_b for
each id value.  So I tried doing

(SELECT AVG(avg) FROM table_a)

UNION

(SELECT AVG(avg) FROM table_b)

GROUP BY id_field;



This, however, doesn't work.  I can see why when thinking about how a
union should work mathematically, but I'm left with no apparent way to
directly get the avg across two tables.  I know that I can take the avg
from each along with the row count and then do a weighted average using
those values, but I'd prefer not to have to do that.  Any suggestions, or
am I stuck doing individual group bys on each table, unioning the
results and then doing weighted averages on the matching pairs?



Here is what I'm talking about doing with the weighted averages incase
it is unclear:

(SELECT AVG(avg) AS avg_a, COUNT(id_field) AS count_a FROM table_a GROUP
BY id_field)

UNION

(SELECT AVG(avg) AS avg_b, COUNT(id_field) AS count_b FROM table_b GROUP
BY id_field);



Then I would match up the results and compute total_avg = (avg_a*count_a
+ avg_b*count_b)/(count_a+count_b).  This is not nearly as clean as I
would like.







John A. McCaskey









MERGE table across InnoDB tables?

2004-02-24 Thread John McCaskey
I attempted to create a merge table across two identical InnoDB tables
and it seemed to work without error.  But when I went to use the table I
got a 'file not found my_table.MRG (error number 2)' error.  The file
does in fact exist, but it is only 54 bytes.

 

I'm guessing that I can only create a merge table with MyISAM tables.
Is this correct?  The manual says that a merge table is a collection of
MyISAM tables but it doesn't explicitly state that it doesn't work with
other types.

 

 

John A. McCaskey

 



GROUP BY across UNION

2004-02-18 Thread John McCaskey
First of all I am using mysql 4.0.18 and I am not free to upgrade to 4.1
or 5.0.

 

I have two tables:

 

table_a, and table_b these two tables have the same structure:

CREATE table_a (

  id_field mediumint(8) unsigned NOT NULL,

  avg float default NULL

)

 

What I want to do is get the combined avg across table_a and table_b for
each id value.  So I tried doing

(SELECT AVG(avg) FROM table_a) 

UNION

(SELECT AVG(avg) FROM table_b)

GROUP BY id_field;

 

This, however, doesn't work.  I can see why when thinking about how a
union should work mathematically, but I'm left with no apparent way to
directly get the avg across two tables.  I know that I can take the avg
from each along with the row count and then do a weighted average using
those values, but I'd prefer not to have to do that.  Any suggestions, or
am I stuck doing individual group bys on each table, unioning the
results and then doing weighted averages on the matching pairs?

 

Here is what I'm talking about doing with the weighted averages incase
it is unclear:

(SELECT AVG(avg) AS avg_a, COUNT(id_field) AS count_a FROM table_a GROUP
BY id_field) 

UNION

(SELECT AVG(avg) AS avg_b, COUNT(id_field) AS count_b FROM table_b GROUP
BY id_field);

 

Then I would match up the results and compute total_avg = (avg_a*count_a
+ avg_b*count_b)/(count_a+count_b).  This is not nearly as clean as I
would like.  

 

 

 

John A. McCaskey

 



RE: Query matching

2004-02-06 Thread John McCaskey
Yes, I think the most straightforward way is to simply put in a series of
grouped OR statements.  See below.

SELECT * FROM pages, changelog WHERE
changelog.agent = pages.agent AND
changelog.company = pages.company AND
changelog.magazine = pages.magazine AND
(
changelog.orig_id = pages.mls_1 OR
changelog.orig_id = pages.mls_2 OR
changelog.orig_id = pages.mls_3 OR
changelog.orig_id = pages.mls_4 OR
changelog.orig_id = pages.mls_5 OR
changelog.orig_id = pages.mls_6 OR
changelog.orig_id = pages.mls_7 OR
changelog.orig_id = pages.mls_8 OR
changelog.orig_id = pages.mls_9 OR
changelog.orig_id = pages.mls_10 OR
changelog.orig_id = pages.mls_11 OR
changelog.orig_id = pages.mls_12
)

John A. McCaskey
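An equivalent, more compact spelling, since IN() accepts column references
and is just shorthand for the OR chain:

SELECT * FROM pages, changelog
WHERE changelog.agent = pages.agent
  AND changelog.company = pages.company
  AND changelog.magazine = pages.magazine
  AND changelog.orig_id IN (pages.mls_1, pages.mls_2, pages.mls_3,
                            pages.mls_4, pages.mls_5, pages.mls_6,
                            pages.mls_7, pages.mls_8, pages.mls_9,
                            pages.mls_10, pages.mls_11, pages.mls_12);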



-Original Message-
From: Ed Curtis [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 06, 2004 10:20 AM
To: [EMAIL PROTECTED]
Subject: Query matching



 I've been challenged to write a matching query in a project and do not know
how to handle a part of it. The criteria are as follows:

SELECT * from pages WHERE

changelog.agent = pages.agent AND
changelog.company = pages.company AND
changelog.magazine = pages.magazine

Now for the challenging part, for me at least.

At least one of the following must be true for the query to return a result.

changelog.orig_id = pages.mls_1
changelog.orig_id = pages.mls_2
changelog.orig_id = pages.mls_3
changelog.orig_id = pages.mls_4
changelog.orig_id = pages.mls_5
changelog.orig_id = pages.mls_6
changelog.orig_id = pages.mls_7
changelog.orig_id = pages.mls_8
changelog.orig_id = pages.mls_9
changelog.orig_id = pages.mls_10
changelog.orig_id = pages.mls_11
changelog.orig_id = pages.mls_12

Would I nest these as an OR statement and how would I go about it?

Thanks,

Ed Curtis






RE: updates on slave server??

2004-01-27 Thread John McCaskey
The updates on the slave will not synchronize to the master if you have a
one-direction master-slave relationship set up.  It is possible, however, to
set up each server as a master and as a slave, so that server 1 is the master
for server 2 and server 2 is the master for server 1.  Then queries will
propagate in both directions.  This may be what you are looking for.

If you set up such bi-directional replication and you are executing queries
at the same time on both servers, you need to be careful about things like
auto-increment columns getting out of sync and causing conflicts due to the
latency in replication.

John A. McCaskey


-Original Message-
From: Mike Mapsnac [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 26, 2004 4:50 PM
To: [EMAIL PROTECTED]
Subject: updates on slave server?? 


Hello

I found today that some data on the slave server were different than on the
master server.  I don't know how that can happen.  The slave server was
running, and show slave status showed no errors.

If I make some updates on the slave server, will those updates be reflected
on the master server?  Or can the slave not synchronize back to the master?

Thanks




Re: Memory leaks using MySQL C Api

2004-01-18 Thread John McCaskey
Agreed, I am not calling mysql_store_result().  I attempted to add
my_free(), but the function does not seem to exist; it is also not listed
in the API docs for the C API.  As such it still seems that there should
be no leak, and yet I do get one.  Thanks for the idea anyway, Chris;
maybe you can clarify for us once more.

John McCaskey

On Sat, 2004-01-17 at 12:44, Aftab Jahan Subedar wrote:
 Hey wait a minute. Where did you get the my_free(), may be you are 
 trying to say mysql_free(), but then that is used only if result set is 
 used/called.
 
 But the code does not show any result set call. ie. mysql_use_result() 
 or mysql_store_result().
 
 So, the question now, how come there is a leak here. I dont see any, 
 does anyone see any?
 
 Chris Nolan wrote:
 
  Hi!
  
  You're looking for the function my_free(). Enjoy!
  
  Regards,
  
  Chris
  
  John McCaskey wrote:
  
  I have the following code:
 
 
 
 //try the mysql connection
 
 mysql_init(&mysql_connection);
 
 if(!mysql_real_connect(&mysql_connection, db_host, db_user, 
  db_pass,
  db_db, 0, NULL, 0)) {
 
 flockfile(stderr);
 
 fprintf(stderr, "%s: Failed to connect to database: Error:
  %s\n", timestamp, mysql_error(&mysql_connection));
 
 funlockfile(stderr);
 
 mysql_close(&mysql_connection);
 
 return(2);
 
 }
 
 
 
 mysql_close(&mysql_connection);
 
 
 
  This code is creating a memory leak.  Am I missing some cleanup calls? 
  I'm
  under the impression all I should need to do is call mysql_close to 
  clean up
  the connection?  I'm testing this using mtrace, if I place a return 
  directly
  above the code segment it reports no leaks, if I place it direcly 
  below the
  fragment there are several variables reported as not being freed.  Any
  ideas?
 
 
 
  John A. McCaskey
 
 
 
 
   
 
  
  
 
 -- 
 
 Aftab Jahan Subedar
 CEO/Software Engineer
 Subedar Technologies
 Subedar Baag
 Bibir Bagicha #1
 North Jatrabari
 Dhaka 1204
 Bangladesh
 http://www.SubedarTechnologies.com
 http://www.DhakaStockExchangeGame.com/
 http://www.CEOBangladesh.com/
 http://www.NYSEGame.com
 tel://+88027519050
 EMail://[EMAIL PROTECTED] - Directly to my notebook
 
 





Memory leaks using MySQL C Api

2004-01-16 Thread John McCaskey
I have the following code:

 

//try the mysql connection

mysql_init(&mysql_connection);

if(!mysql_real_connect(&mysql_connection, db_host, db_user, db_pass,
db_db, 0, NULL, 0)) {

flockfile(stderr);

fprintf(stderr, "%s: Failed to connect to database: Error:
%s\n", timestamp, mysql_error(&mysql_connection));

funlockfile(stderr);

mysql_close(&mysql_connection);

return(2);

}

 

mysql_close(&mysql_connection);

 

This code is creating a memory leak.  Am I missing some cleanup calls?  I'm
under the impression all I should need to do is call mysql_close to clean up
the connection.  I'm testing this using mtrace; if I place a return directly
above the code segment it reports no leaks, and if I place it directly below
the fragment there are several variables reported as not being freed.  Any
ideas?

 

John A. McCaskey

 



RE: Change from loop to single query

2004-01-02 Thread John McCaskey
Try forming the query with only the first array element, then iterating
through the rest, concatenating OR clauses onto the end of the query, and
then after the loop sending the query to the db.  See my pseudocode
below.
String query = "UPDATE users SET status = 'no' WHERE name = '" + array[0] + "'";

while (array[1:end] contains elements) {
    query = query + " OR name = '" + array[i] + "'";
}
execute(query);

John A. McCaskey
Software Development Engineer
IP Sciences, Inc.
[EMAIL PROTECTED]
206.633.0449
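The finished statement then looks like this (names hypothetical), which you
could equally express with IN():

UPDATE users SET status = 'no'
WHERE name IN ('alice', 'bob', 'carol');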


-Original Message-
From: Jonathan Villa [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 02, 2004 9:17 AM
To: [EMAIL PROTECTED]
Subject: Change from loop to single query


I have a loop which is similar to the following:

while(array contains elements) {
UPDATE users SET status = 'no' WHERE name = array[i]
}
Great, it works, but the query runs many times. I want to make only one call
to the database and have all the elements in the array be included in the
query.






Replication inconsistency questions

2003-12-15 Thread John McCaskey
Hello, I'm currently testing out replication on a high-volume InnoDB-based
database.  This has been working great for several weeks, but when I came in
this morning I found my slave had the following error:

 

031212 23:30:24  Slave: query 'UPDATE monitor_tunnel_cisco_phase_2 SET
counter=0, timestamp=NOW()

 
WHERE monitor_id=3735 AND monitor_server_id=1 AND phase_2_id=6290' partially

completed on the master and was aborted. There is a chance that your master
is inconsistent at this point. If you are sure that your master is ok, run

this query manually on the slave and then restart the slave with SET GLOBAL
SQL_SLAVE_SKIP_COUNTER=1; SLAVE START; . Error_code: 1053

031212 23:30:24  Error running query, slave SQL thread aborted. Fix the
problem, and restart the slave SQL thread with SLAVE START. We stopped at
log

 'slave_test_binlog.169' position 6404579

 

I was able to resolve this easily by running the query on the slave, setting
the skip counter to 1, and restarting the slave.  However, I would like to
know what caused this error.  I can find no log of any such error on my
master.  Should a failed query on the master have shown up in my
/var/log/mysql/mysql.err?  What would cause the query to have partially
completed and aborted?  Everything seems fine on the master, so I'm not sure
why the query was aborted.

 

My master is running 4.0.13 and my slave is running 4.0.16; could this be the
cause of the error?

 

John A. McCaskey

Software Development Engineer

IP Sciences, Inc.

[EMAIL PROTECTED]

206.633.0449