Ok. Thank you for all your help! I will perform some tests to see how to get the best 
results. 
Thank you for offering to help with this stage, but I'll try doing it by myself.


Remi Mikalsen

E-Mail: [EMAIL PROTECTED]
URL:    http://www.iMikalsen.com



On 5 Oct 2004 at 14:11, [EMAIL PROTECTED] wrote:

> 
> 
> The only way to know for sure is to try both methods and compare
> performance. I have had excellent response times from my temporary
> tables (especially since you can add indexes to them). If you would
> like some help in crafting some test queries, just post the SHOW
> CREATE TABLE results for the tables you want, along with a good
> description of the data you want, and I or someone else would be
> happy to help.
> 
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine 
> 
> "Remi Mikalsen" <[EMAIL PROTECTED]> wrote on 10/05/2004 01:51:55 PM:
> 
> > Ok... I've taken a close look at the various options.
> > 
> > * The use of a temporary table would solve the problem. However, I am
> > very uncertain about the efficiency of temporary tables used the way
> > you suggest. I would need 6 queries to the database, plus the
> > creation of a new table based on another that might contain more than
> > 1,000,000 records. Of course, the content of the temporary table
> > would be much smaller, probably never more than a couple of thousand
> > records. Is it possible to estimate the time needed for these
> > operations? Can I be certain that this is more efficient than using
> > two queries with scripting in between, the way I described before? I
> > guess the main difference comes down to 4 extra queries (creating,
> > dropping, locking and unlocking tables) vs. reading 1,000,000 records
> > with PHP. I would believe that using MySQL is always better, but I am
> > not sure; I have never had the chance to compare these things on big
> > tables.
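> > 
> > Concretely, the four extra statements I have in mind would look
> > something like this (just a sketch; table and column names taken
> > from the earlier query):
> > 
> > ```sql
> > -- 1. create the temporary table from the aggregate query
> > CREATE TEMPORARY TABLE tmp_recent
> >   SELECT user, MAX(entry) AS most_recent
> >   FROM userlog WHERE user != '' GROUP BY user;
> > 
> > -- 2./3. lock and unlock around the join, if needed at all;
> > -- TEMPORARY tables are private to the session, so locking would
> > -- only matter for concurrent writes to userlog
> > LOCK TABLES userlog READ;
> > -- ... join query against tmp_recent runs here ...
> > UNLOCK TABLES;
> > 
> > -- 4. drop the temporary table when done
> > DROP TEMPORARY TABLE tmp_recent;
> > ```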
> > 
> > * I'm running version 4.0.21, which rules out suggestions 2 and 3.
> >   - This is how I would have done things using any SQL-99 compliant
> >     DBMS.
> > 
> > * The 4th suggestion also solves my problem; this one was completely
> > new to me. But I realise that some creative mind came up with it due
> > to the lack of a reasonable solution in MySQL. Would it be more or
> > less efficient than the 1st? I'm not very keen on the idea of locking
> > tables, so if the efficiency penalty of this solution is relatively
> > small (<15-20%) compared to the first option, and as long as it is
> > faster than my current solution, I would prefer it.
> > 
> > Thanks a lot for all these pointers!
> > 
> > 
> > Remi Mikalsen
> > 
> > E-Mail:  [EMAIL PROTECTED]
> > URL:  http://www.iMikalsen.com
> > 
> > On 5 Oct 2004 at 9:45, [EMAIL PROTECTED] wrote:
> > 
> > > You are correct, there is no predicate you can use to pull the
> > > refurl value from the same row that the MAX(entry) value comes
> > > from. That will require a second processing stage. Depending on
> > > what version of MySQL you are running, you have some options:
> > > 
> > > * Store the results of this MAX() query in a temp table and join
> > > those results back to the userlog table to get the other fields
> > > you need (like refurl) where the dates and user match.
> > > 
> > > * Use the "anonymous view" feature to skip the step of creating a
> > > temp table (that's where you surround a query with parentheses and
> > > use it like any other table in an outer query; don't forget to
> > > alias your "view"). This functionality is a side effect of the
> > > parsing engine's treatment of parentheses in a query and is
> > > available as of 4.1 (it arrived with the UNION statement).
> > > 
> > > * You may be able to use a subquery to get matching non-aggregated
> > > values.
> > > 
> > > * You may be able to use the MAX-CONCAT trick as described in the
> > > manual.
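> > > 
> > > For illustration, rough sketches of each option (untested, and
> > > assuming the userlog table and columns from your query; adjust
> > > names and padding widths to your schema):
> > > 
> > > ```sql
> > > -- Option 1: temp table joined back to userlog
> > > CREATE TEMPORARY TABLE tmp_recent
> > >   SELECT user, MAX(entry) AS most_recent
> > >   FROM userlog WHERE user != '' GROUP BY user;
> > > 
> > > SELECT u.user, u.refurl, t.most_recent
> > > FROM tmp_recent t, userlog u
> > > WHERE u.user = t.user AND u.entry = t.most_recent
> > > ORDER BY t.most_recent DESC LIMIT 10;
> > > 
> > > DROP TEMPORARY TABLE tmp_recent;
> > > 
> > > -- Option 2 (4.1+): the same join with an "anonymous view"
> > > -- (derived table) in place of the temp table
> > > SELECT u.user, u.refurl, t.most_recent
> > > FROM (SELECT user, MAX(entry) AS most_recent
> > >       FROM userlog WHERE user != '' GROUP BY user) AS t,
> > >      userlog u
> > > WHERE u.user = t.user AND u.entry = t.most_recent
> > > ORDER BY t.most_recent DESC LIMIT 10;
> > > 
> > > -- Option 3 (4.1+): correlated subquery
> > > SELECT user, refurl, entry AS most_recent
> > > FROM userlog u
> > > WHERE user != ''
> > >   AND entry = (SELECT MAX(entry) FROM userlog
> > >                WHERE user = u.user)
> > > ORDER BY most_recent DESC LIMIT 10;
> > > 
> > > -- Option 4: the MAX-CONCAT trick; pack the entry value and
> > > -- refurl into one string, take MAX(), then strip the fixed-
> > > -- width prefix. Assumes entry converts to a fixed-width string
> > > -- when padded to 19 characters (e.g. a DATETIME).
> > > SELECT user,
> > >        SUBSTRING(MAX(CONCAT(LPAD(entry, 19, '0'), refurl)), 20)
> > >          AS refurl,
> > >        MAX(entry) AS most_recent,
> > >        COUNT(user) AS visits
> > > FROM userlog
> > > WHERE user != ''
> > > GROUP BY user
> > > ORDER BY most_recent DESC LIMIT 10;
> > > ```
> > > 
> > > (If two rows share the same MAX(entry), options 1-3 return both
> > > rows; option 4 picks the one whose refurl sorts last.)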
> > > 
> > > Shawn Green
> > > Database Administrator
> > > Unimin Corporation - Spruce Pine
> > > 
> > > "Remi Mikalsen" <[EMAIL PROTECTED]> wrote on 10/04/2004 05:21:44 PM:
> > > 
> > > > If you aren't tired of this problem, I have one more question...
> > > > 
> > > > The last query worked well, except one small detail... 
> > > > SELECT user, refurl, MAX(entry) AS most_recent, COUNT(user) AS visits
> > > > FROM userlog
> > > > WHERE user != ''
> > > > GROUP BY user
> > > > ORDER BY most_recent DESC
> > > > LIMIT 10
> > > > 
> > > > I am now including ONE extra field: "refurl". MySQL doesn't grab
> > > > the refurl from the SAME row as it grabs the last entry time! I
> > > > would like to be able to get this extra field (and maybe others
> > > > too), and as they are potentially different from time to time,
> > > > just any of them won't do.
> > > > 
> > > > I could probably get around the problem with a join on the time
> > > > column (maybe I should create an index to enhance efficiency?),
> > > > but I wonder if there is a more efficient way to solve this?
> > > > Additionally, IF there were two entries at the same time (which
> > > > is highly improbable, and not critical at all if it happens once
> > > > in a million) I might not get the correct result anyway. Maybe
> > > > I'm being picky, but while I'm at it, it might as well be done
> > > > well.
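> > > > 
> > > > (For the index idea, a composite index covering both the
> > > > grouping and the join back on the time column might look like
> > > > this; just a sketch, assuming MySQL 4.0 syntax:)
> > > > 
> > > > ```sql
> > > > ALTER TABLE userlog ADD INDEX idx_user_entry (user, entry);
> > > > ```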
> > > > 
> > > > Thank you for the help so far anyway!! 
> > > > 
> > > > 
> > > > Remi Mikalsen 
> > > > 
> > > > E-Mail:  [EMAIL PROTECTED] 
> > > > URL:  http://www.iMikalsen.com 
> > > > 
> > > > 
> <<major snippage>> 
> > -- 
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
> > 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
