Re: Some BLOB help please.

2004-06-22 Thread Michael Stassen
Justin Crone wrote:

So as I said, I am rather pleased with the performance and the ease with 
which I was able to get this up and running. However, the problem is that 
I do have limits, one of those being disk space. Those 10,000 files are 
taking up 21 GB of space in the database, yet the actual space required by 
the files is around 5 GB on the file system. The average file size is about 
1.9 MB, so it would seem that each row inserted into the database is 
conforming to that 1.9 MB average, giving me this 21 GB table.
Could you explain that again?  If the average file size is 1.9 MB, then 21 GB 
for 10,780 files is about right.  On the other hand, if the total is 5 GB, 
then 5 GB / 10,780 yields about 0.47 MB per file.  So which is it?  Do your 
files average 1.9 MB, in which case we must wonder how you stored them in 
only 5 GB, or do your files average 0.47 MB, in which case we must figure out 
why they are roughly 4 times as big when stored as BLOBs?  (The manual says 
BLOBs take length + 2 bytes to store.)

I would like to know if there is something I can change to get these numbers 
in line with each other, or if this is just the way of things. The current 
projection for the total number of documents needed to complete the rotation 
of these files is 720,000. If the 1.9 MB average holds, that puts me in the 
neighborhood of 1.4 TB of storage.
Even at 0.47 MB per file, that's about 330 GB of storage required just for 
the 720,000 files.
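The competing scenarios above are easy to sanity-check with a quick bit of SQL arithmetic (the figures come straight from the numbers quoted in the thread):

```sql
-- Sanity check of the two scenarios discussed above.
SELECT
    10780 * 1.9 / 1024          AS gb_if_avg_is_1_9mb,    -- roughly 20 GB, matching the 21 GB table
    5 * 1024 / 10780            AS mb_avg_if_total_5gb,   -- roughly 0.47 MB per file
    720000 * 1.9 / 1024 / 1024  AS tb_projected_at_1_9mb, -- roughly 1.3 TB
    720000 * 0.47 / 1024        AS gb_projected_at_0_47mb; -- roughly 330 GB
```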

Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


Some BLOB help please.

2004-06-22 Thread Justin Crone
Hello All,
I have been reading hundreds of posts and sites for information 
regarding BLOBs in MySQL. So far I have not found an answer to my 
question, so I pose it to you fine people. :)

A little background first. I have a web server that uses PHP to retrieve 
documents in PDF format for viewing and printing. When I first developed 
the application I was storing the files on the file system. However, about 
7 months into the project, my file system was becoming all but 
unmanageable. At 175,000 documents, I decided I needed a new mechanism for 
storing the files, so I began to use BLOBs.

I have started the project again with BLOBs, not yet importing the 
documents from the old project. Currently I have 10,780 files in the 
database, and all is working excellently. I have 3 tables: one for the 
BLOBs with unique keys, a second with information about the BLOBs, and a 
third with the actual relevant information about the documents.

So as I said, I am rather pleased with the performance and the ease with 
which I was able to get this up and running. However, the problem is that 
I do have limits, one of those being disk space. Those 10,000 files are 
taking up 21 GB of space in the database, yet the actual space required by 
the files is around 5 GB on the file system. The average file size is about 
1.9 MB, so it would seem that each row inserted into the database is 
conforming to that 1.9 MB average, giving me this 21 GB table.

I would like to know if there is something I can change to get these 
numbers in line with each other, or if this is just the way of things. The 
current projection for the total number of documents needed to complete the 
rotation of these files is 720,000. If the 1.9 MB average holds, that puts 
me in the neighborhood of 1.4 TB of storage.

Any thoughts?
Thanx,
Justin


Re: Performance issues

2004-06-22 Thread mos
At 12:34 PM 6/22/2004, you wrote:

We have an internal SNMP monitoring system that is monitoring about 10,000 
devices.  Each device is pinged then polled for an average of 25-30 
elements.  Each of the ping results and elements is then stored in a text 
file; another system picks them up (NFS) and inserts them into a 
MyISAM (3.23.54) database.  The data is kept for 13 weeks.

The database system is a 4-way Xeon with 12 GB of RAM and a striped RAID 
array dedicated to the database files, its indexes, and such.

Every 5 minutes another process goes through the last set of inserts and 
compares them for any threshold breaches, so the entire last set of data 
is looked at.

We're falling behind on the inserts because the system can't seem to 
handle the volume of inserts, and the front end that generates the web 
pages based on the previous records is bogging down.
Have you tried "LOAD DATA INFILE"? It loads data from a text file into a 
table and is much faster than individual "INSERT ..." statements. For 
example, I can load 1 million rows of x(30) into a MyISAM table in 15 
seconds on a P4 2.4 GHz machine. You can use either IGNORE or REPLACE to 
handle duplicate keys.
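As a sketch, the suggestion might look like this (the file path, table name, and delimiters are assumptions, not details from the original post):

```sql
-- Hypothetical example of bulk-loading the collector's text file.
-- REPLACE (or IGNORE) resolves rows that collide on a unique key.
LOAD DATA INFILE '/var/spool/snmp/results.txt'
REPLACE INTO TABLE snmp_results
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
```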

Mike


Re: MySQL & Web Clustering...

2004-06-22 Thread Terence
Apparently openMosix won't work with MySQL because MySQL uses shared memory.
A component called MAASK might help.

Roy Nasser wrote:
Hi All,
We have recently acquired some new machines for our ASP service, and I
am investigating different options and setups to optimize everything.
We currently have one large DB server with RAID 5, etc., running MySQL,
and a few smaller servers for web applications and e-mail.  These
smaller servers aren't all identical in their software, and they run
different services.
We have reached a certain limit in the DB as well as in some of our
applications on the webservers, hence the need for something expandable.
I have read a little about MySQL Cluster, as well as some other solutions
such as openMosix, Mosix, and LVS.  I was wondering if you have experience
with these, and what you would recommend for the DB and for the
webservers.  I still want to keep them separate; even if I end up having
2 different clusters, I prefer to keep the DB away from the application
for security reasons.
Thanks!
Roy


Re: How do you deal with URL's?

2004-06-22 Thread Robert A. Rosenberg
At 22:38 -0300 on 06/21/2004, Sergio Salvi wrote about Re: How do you 
deal with URL's?:

Separate data from how it's displayed. I mean, create a table called
"states" with the fields "state_id", "state_name" and "state_url". Put
the data in the according field:
state_id state_name state_url
1 Alabama http://www.alabama.gov
2 Washington http://access.wa.gov
...and so on
Instead of an auto_increment state_id, go with the USPS 2-letter code 
(AL, WA, etc.).  That way you can pull up the state name from an 
address.
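Putting the two suggestions together, the schema might be sketched like this (column sizes are assumptions):

```sql
-- States keyed by the USPS 2-letter code instead of an auto_increment id.
CREATE TABLE states (
    state_id   CHAR(2)      NOT NULL PRIMARY KEY,  -- 'AL', 'WA', ...
    state_name VARCHAR(40)  NOT NULL,
    state_url  VARCHAR(255) NOT NULL
);

INSERT INTO states (state_id, state_name, state_url) VALUES
    ('AL', 'Alabama',    'http://www.alabama.gov'),
    ('WA', 'Washington', 'http://access.wa.gov');
```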



Re: AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)

2004-06-22 Thread Kevin Brock
On Jun 18, 2004, at 5:31 PM, Scott Haneda wrote:
While I do not know why, I would suggest you simply drop the PK and 
recreate it; this should be a whole lot faster than the ALTER.
This took the same amount of time as the ALTER TABLE (a little longer, 
actually).  The documentation says that in later versions of MySQL, 
DROP INDEX is mapped to ALTER TABLE, so I'd pretty much expect it to 
work that way.

Kevin



Remote Procedure Calls or DRDA

2004-06-22 Thread Donna Croland
Does MySQL support remote procedure calls or DRDA?  I have partitioned 
my database across multiple servers and need to join two tables from 
two different servers (MySQL instances), and I am wondering what the 
best method for doing this would be.  Any help would be greatly 
appreciated!

Thanks in Advance,
Donna


Data spikes and buffering inserts...

2004-06-22 Thread Alejandro Heyworth
Hi!
My MyISAM database works perfectly for 98% of the cases that we encounter.
However, there are a few cases during our experiment that we know will 
cause spikes in dataflow.  We know both the why and the when, so we are 
trying to plan appropriately for it.

Is there a way that I can mix in-memory inserts for that period (maybe 
using the MEMORY/HEAP engine) directly with another MyISAM table for the 
rest of the experiment?

I'm looking for a way to buffer the data while still keeping it 
available in real time.

I have access to a RH9 8GB SMP system.
Using INSERT DELAYED seems to kill the server when it hits these data spikes.
I guess what I'm looking for is a way to insert 1,000,000 rows from a HEAP 
engine table to the beginning of a MyISAM table as a block instead of using 
something like this:

INSERT INTO hipjoint SELECT * FROM memhipjoint
Any help would be great! 
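One possible buffer-and-flush pattern (the table names come from the post above, but the locking and cleanup steps are assumptions, a sketch rather than a tested recipe):

```sql
-- During the spike, inserts go to the in-memory HEAP table (memhipjoint).
-- Afterwards, move the buffered rows to the MyISAM table in one block and
-- empty the buffer, locking both tables so no rows slip in between steps.
LOCK TABLES hipjoint WRITE, memhipjoint WRITE;
INSERT INTO hipjoint SELECT * FROM memhipjoint;
DELETE FROM memhipjoint;
UNLOCK TABLES;
```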



Re: Newby to loading

2004-06-22 Thread Daniel Godefroy
You probably wrote '\GRPO617' instead of '/GRPO617'
At 07:58 AM 6/22/2004, you wrote:
I'm trying to load ACD call data into a table for the first time.  I've
saved my data file as a text file in my root directory (Win XP, C:\)

When I try to run:

LOAD DATA INFILE '/GRPO617'
INTO TABLE helpdeskgrouptotal;

I get the following error in mysql control center:

[Chris Stevenson] ERROR 1105: File '\GRPO617' not found (Errcode: 2)

Have a great day and thanks in advance for any assistance!

Chris Stevenson
Call Center Manager
The American Board of Family Practice


Daniel Godefroy
3445 Ridgewood, #109, Montreal, Quebec, Canada; Tel: (514) 382-5917
73, Rue Lambert, Petion-ville, Haiti; Tel: (509) 403-0873 / 257-3413; BP 15324
url: www.sigasa.com/s_index.htm; Voice over IP: 1 305 433 2762 / 1 514 448 1077



Re: Views Syntax for MySQL 5

2004-06-22 Thread Andrew Pattison
Unnamed views are supposed to be there already. I know this is not the
"real thing"; I just thought I would mention it. ;-)

Cheers

Andrew.

- Original Message - 
From: "Josh Trutwin" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, June 22, 2004 5:53 PM
Subject: Re: Views Syntax for MySQL 5


> On Tue, 22 Jun 2004 16:34:44 +0100
> "Nic Skitt [Real Productions]" <[EMAIL PROTECTED]> wrote:
>
> > Hi All,
> >
> > I have noticed a lack of comments re Views.
> >
> > Is it 5.0 or 5.1 that we will have Views capability?
> >
> > I had hoped it was in the most up to date public development release
> > of 5. Unless I am getting the Views syntax wrong I assume its not
> > there.
> >
> > If it is not already packaged in V5.0 then will the syntax be the
> > standard SQL view syntax?
>
> The online manual is your friend:
>
> http://dev.mysql.com/doc/mysql/en/index.html
>
> First hit for searching on "views":
>
> http://dev.mysql.com/doc/mysql/en/ANSI_diff_Views.html
>
> Sounds like it will make it into the 5.0 branch, but has not yet.
>
> Another good link:
>
> http://dev.mysql.com/doc/mysql/en/Roadmap.html
>
> Josh
>
>






AES_DECRYPT (PHP Script)

2004-06-22 Thread Sapenov
Hi,

I have a problem fetching records in a PHP script.

table structure :
---
id  int(10)
message text
--

I place information into the message field using the following query:

INSERT INTO table (message) VALUES (AES_ENCRYPT('$message','password'));

It works fine, as I see in mysqlcc.

However, when I fire this query in both the PHP script and mysqlcc:

SELECT AES_DECRYPT(message,'password'),message FROM CRM_tickets WHERE id='39' 

it outputs the following:

---
AES_DECRYPT(message,'password') |  message 
---
decrypted text|  encrypted text
|  encrypted text
|  encrypted text
|  encrypted text
|  encrypted text

--- 

*It looks like the function only works once.*


Regards,

Khazret Sapenov
Software Developer
CMS Inc
Toronto, Canada
  
 

Re: New to Dates - Plain English Please

2004-06-22 Thread Andrew Pattison
Another possible solution would be to store partial dates as, for example,
2004-05-01. In other words, simply make the partial date the same as the
first day of that month. This has the advantage that date range functions
work, but you will not be able to tell the difference between a partial
date and a complete date.
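For example (the table and column names here are hypothetical), "May 2002" would be stored and queried like this:

```sql
-- Partial date "May 2002" stored as the first day of the month.
INSERT INTO events (event_date) VALUES ('2002-05-01');

-- Range queries still work: everything in May 2002, partial or complete.
SELECT * FROM events
WHERE event_date >= '2002-05-01'
  AND event_date <  '2002-06-01';
```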

Cheers

Andrew.

- Original Message - 
From: "Michael Stassen" <[EMAIL PROTECTED]>
To: "Jochem van Dieten" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, June 22, 2004 8:30 PM
Subject: Re: New to Dates - Plain English Please


>
> Jochem van Dieten wrote:
>
> > Michael Stassen wrote:
> >
> >> Jochem van Dieten wrote:
> >>
> >>> David Blomstrom wrote:
> >>>
> 
>  Suppose you have a list of dates in the proper format,
>  like this:
> 
>  2004-02-04
>  2003-11-02
> 
>  and you encounter a date with only the month and year,
>  like May 2002. How would you insert that
> >>>
> >>>
> >>> Not. Prompt the user for a full date.
> >>
> >>
> >> That is not strictly true.
> >
> >
> > It is what I would do, therefore it is a true answer to the question.
> >
> > Jochem
>
> Then I misunderstood you.  I took your answer to mean that you cannot store
> dates without the day part, rather than that your advice was not to do so.
> I accept your explanation that you meant the latter, but I don't think
> that's clear from your original wording.
>
> Perhaps you read an accusatory tone in my choice of wording.  That was not
> my intent, and for that I apologize.
>
> Michael
>
>
>






Re: Performance issues

2004-06-22 Thread Sergio Salvi
Hi!

Can you give more details on the problematic inserts you're doing (table 
structure, indexes, and the insert command)?

Also, do you believe your queries would benefit from MySQL's query cache? 
It might be worth upgrading to version 4 and using this feature, even if 
you allocate just a small amount of memory to it.

[]s,
Sergio Salvi.

On Tue, 22 Jun 2004, Aram Mirzadeh wrote:

> 
> 
> We have an internal SNMP monitoring system that is monitoring about 
> 10,000 devices.  Each device is pinged then polled for an average 
> of 25-30 elements.  Each of the ping results and elements is then 
> stored in a text file; another system picks them up (NFS) and inserts 
> them into a MyISAM (3.23.54) database.  The data is kept for 13 weeks.
> 
> The database system is a Xeon 4 way, 12GB of ram with a striped raid 
> array dedicated to the database files and its indexes and such.
> 
> Every 5 minutes another process goes through the last set of inserts and 
> compares them for any threshold breaches, so the entire last set of data 
> is looked at.
> 
> We're falling behind on the inserts because the system can't seem to 
> handle the volume of inserts, and the front end that generates the web 
> pages based on the previous records is bogging down.
> 
> I have read the regular optimizations papers and have done as much as I 
> felt safe, are there any huge database optimization papers?  Anything I 
> should be looking at?
> 
> Here is the relavent my.cnf entries:
> 
> set-variable= key_buffer=256M
> set-variable= max_allowed_packet=1M
> set-variable= table_cache=256
> set-variable= sort_buffer=1M
> set-variable= record_buffer=1M
> set-variable= myisam_sort_buffer_size=64M
> set-variable= thread_cache=8
> set-variable= thread_concurrency=8
> 
> [mysql]
> no-auto-rehash
> 
> [isamchk]
> set-variable= key_buffer=128M
> set-variable= sort_buffer=128M
> set-variable= read_buffer=2M
> set-variable= write_buffer=2M
> 
> [myisamchk]
> set-variable= key_buffer=128M
> set-variable= sort_buffer=128M
> set-variable= read_buffer=2M
> set-variable= write_buffer=2M
> 
> And here is my top output:
> 
> MySQL on 1.2.3.4 (3.23.54) up 2+06:36:05 [13:10:01]
>   Queries: 191.5M  qps: 1022 Slow:   296.0 Se/In/Up/De(%): 
> 22/10/62/00
>   qps now:  147 Slow qps: 0.0  Threads:9 (   2/   7) 
> 50/00/00/00
>   Key Efficiency: 96.7%  Bps in/out:  4.0k/ 1.6k   Now in/out:  6.2k/767.7k
> 
> Any suggestions would be greatly appreciated.
> 
> Aram
> 
> 








Re: A Complicated Group Query

2004-06-22 Thread shaun thornburgh
Hi Shawn,
A slight correction(!): Client_ID is contained in the Projects table, as a 
Client can have many Projects.  Therefore c.Project_ID will cause an error...

Here is a definition of the tables:
mysql> DESCRIBE Users;
+---------------+--------------+------+-----+---------+----------------+
| Field         | Type         | Null | Key | Default | Extra          |
+---------------+--------------+------+-----+---------+----------------+
| User_ID       | int(11)      |      | PRI | NULL    | auto_increment |
| Client_ID     | int(3)       | YES  |     | NULL    |                |
| User_Username | varchar(40)  |      |     |         |                |
| User_Password | varchar(20)  | YES  |     | NULL    |                |
| User_Name     | varchar(100) |      |     |         |                |
| User_Type     | varchar(20)  |      |     | Nurse   |                |
| User_Email    | varchar(100) | YES  |     | NULL    |                |
| User_Location | varchar(40)  | YES  |     | NULL    |                |
+---------------+--------------+------+-----+---------+----------------+
15 rows in set (0.00 sec)

mysql> DESCRIBE Projects;
+--+--+--+-+-++
| Field| Type | Null | Key | Default | Extra  |
+--+--+--+-+-++
| Project_ID   | int(11)  |  | PRI | NULL| auto_increment |
| Project_Name | varchar(100) |  | | ||
| Client_ID| int(11)  |  | | 0   ||
+--+--+--+-+-++
4 rows in set (0.00 sec)
mysql> DESCRIBE Clients;
+-+--+--+-+-++
| Field   | Type | Null | Key | Default | Extra  |
+-+--+--+-+-++
| Client_ID   | int(11)  |  | PRI | NULL| auto_increment |
| Client_Name | varchar(100) | YES  | | NULL||
+-+--+--+-+-++
2 rows in set (0.00 sec)
mysql> DESCRIBE Bookings;
+-------------------------+-------------+------+-----+---------------------+----------------+
| Field                   | Type        | Null | Key | Default             | Extra          |
+-------------------------+-------------+------+-----+---------------------+----------------+
| Booking_ID              | int(11)     |      | PRI | NULL                | auto_increment |
| Booking_Type            | varchar(15) |      |     | Unavailability      |                |
| User_ID                 | int(11)     |      |     | 0                   |                |
| Project_ID              | int(11)     | YES  |     | NULL                |                |
| Booking_Creator_ID      | int(11)     | YES  |     | NULL                |                |
| Booking_Creation_Date   | datetime    | YES  |     | NULL                |                |
| Booking_Start_Date      | datetime    |      |     | 0000-00-00 00:00:00 |                |
| Booking_End_Date        | datetime    |      |     | 0000-00-00 00:00:00 |                |
| Booking_Completion_Date | date        | YES  |     | NULL                |                |
| Booking_Mileage         | int(5)      | YES  |     | NULL                |                |
| Booking_Status          | varchar(15) |      |     | Other               |                |
| Additional_Notes        | text        | YES  |     | NULL                |                |
+-------------------------+-------------+------+-----+---------------------+----------------+
20 rows in set (0.01 sec)

mysql>
Thanks a million for your help here :)

From: [EMAIL PROTECTED]
To: "shaun thornburgh" <[EMAIL PROTECTED]>
CC: [EMAIL PROTECTED]
Subject: Re: A Complicated Group Query
Date: Tue, 22 Jun 2004 14:32:14 -0400
OOPS! That's because my second query should have been a LEFT JOIN and not a
RIGHT JOIN (I hate making silly cut-and-paste errors like that) 8-{
If I understand you correctly, you want to see a count of how many bookings
you have had per project broken down by user location. Try this:
SELECT c.Project_ID, u.User_Location, COUNT(b.booking_ID) as Bookings
FROM Clients c
LEFT JOIN Users u
  on u.Client_ID = c.Client_ID
LEFT JOIN Bookings b
  on b.User_ID = u.User_ID
GROUP BY c.Project_ID, u.User_Location
Or for just one project, we can do this:
SELECT c.Project_ID, u.User_Location, COUNT(b.booking_ID) as Bookings
FROM Clients c
LEFT JOIN Users u
  on u.Client_ID = c.Client_ID
  and c.Project_ID = 'x'
LEFT JOIN Bookings b
  on b.User_ID = u.User_ID
GROUP BY c.Project_ID, u.User_Location
---  or this 
SELECT c.Project_ID, u.User_Location, COUNT(b.booking_ID) as Bookings
FROM Clients c
LEFT JOIN Users u
  on u.Client_ID = c.Client_ID

RE: Using date ranges on a composite key degrades performance

2004-06-22 Thread SGreen


I worked up some additional variations that produce the same result set.
Would you mind giving them a try to see if any of them works better
with your indexes?

FROM dimension
INNER JOIN factable
  ON dimension.id = factable.id
  AND factable.Date >= '2004-06-15'
  AND factable.Date < '2004-06-22'

- or -

FROM dimension
INNER JOIN factable
  ON dimension.id = factable.id
  AND factable.Date >= '2004-06-15'
WHERE factable.Date < '2004-06-22'

- or -

FROM dimension
INNER JOIN factable
  ON dimension.id = factable.id
WHERE factable.Date >= '2004-06-15'
  AND factable.Date < '2004-06-22'

- or -

FROM dimension
INNER JOIN factable
  ON dimension.id = factable.id
WHERE factable.Date BETWEEN '2004-06-15' and '2004-06-22'
  AND factable.Date < '2004-06-22'


- or -

FROM dimension
INNER JOIN factable
  ON dimension.id = factable.id
  AND factable.Date BETWEEN '2004-06-15' and '2004-06-22'
WHERE factable.Date < '2004-06-22'

- or -
FROM dimension
INNER JOIN factable
  ON dimension.id = factable.id
  AND factable.Date < '2004-06-22'
WHERE factable.Date BETWEEN '2004-06-15' and '2004-06-22'

(These last three use the BETWEEN clause, which returns a closed set, and
open one end by excluding the higher match. I am hoping that BETWEEN will
pick up and use the second key in your index where the other statements
may not.)
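In other words, those last three variants rely on BETWEEN being inclusive on both ends, with the extra `<` predicate trimming the upper endpoint back off, so both forms select the same half-open date range [2004-06-15, 2004-06-22):

```sql
-- Inclusive BETWEEN with the upper bound excluded again...
WHERE factable.Date BETWEEN '2004-06-15' AND '2004-06-22'
  AND factable.Date < '2004-06-22'

-- ...selects the same rows as the plain half-open form:
WHERE factable.Date >= '2004-06-15'
  AND factable.Date < '2004-06-22'
```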

Sorry I couldn't offer any concrete advice.

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

From: "David Perron" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Date: 06/22/2004 03:38 PM
Subject: RE: Using date ranges on a composite key degrades performance






I've tried both ways.

The first way uses:

INNER JOIN dimension USING (id)
INNER JOIN fact_table USING (id)
WHERE factable.Date >= '2004-06-15'
AND factable.Date < '2004-06-22'

The second way:

dimension STRAIGHT_JOIN fact_table
WHERE factable.Date >= '2004-06-15'
AND factable.Date < '2004-06-22'

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 22, 2004 3:16 PM
To: David Perron
Cc: [EMAIL PROTECTED]
Subject: Re: Using date ranges on a composite key degrades performance


Are you using the JOIN ... ON... syntax or the "comma" syntax of joining
tables?





From: "David Perron" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Date: 06/22/2004 02:46 PM
Subject: Using date ranges on a composite key degrades performance

I'm finding that when I execute a query joining to a large fact table (~450
million rows) using a composite key of INT(11) & DATE for a single date,
i.e.

AND dimension.id = factable.id
AND factable.Date = '2004-06-22'

the query returns exceptionally fast.  However, when I change this to a
range, i.e.

AND dimension.id = factable.id
AND factable.Date >= '2004-06-15'
AND factable.Date < '2004-06-22'

the performance degrades disproportionately to the number of single day
queries it would take to get the results.

Since the main part of the SELECT statement is an aggregate, the first
query
does me no good.

Does anyone have a similar experience, and/or suggestion for improving the
performance of this?























RE: Using date ranges on a composite key degrades performance

2004-06-22 Thread David Perron
 
I've tried both ways.

The first way uses:

INNER JOIN dimension USING (id)
INNER JOIN fact_table USING (id)
WHERE factable.Date >= '2004-06-15'
AND factable.Date < '2004-06-22'

The second way:

dimension STRAIGHT_JOIN fact_table
WHERE factable.Date >= '2004-06-15'
AND factable.Date < '2004-06-22'

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 22, 2004 3:16 PM
To: David Perron
Cc: [EMAIL PROTECTED]
Subject: Re: Using date ranges on a composite key degrades performance


Are you using the JOIN ... ON... syntax or the "comma" syntax of joining
tables?



 

From: "David Perron" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Date: 06/22/2004 02:46 PM
Subject: Using date ranges on a composite key degrades performance






I'm finding that when I execute a query joining to a large fact table (~450
million rows) using a composite key of INT(11) & DATE for a single date, i.e.

AND dimension.id = factable.id
AND factable.Date = '2004-06-22'

the query returns exceptionally fast.  However, when I change this to a
range, i.e.

AND dimension.id = factable.id
AND factable.Date >= '2004-06-15'
AND factable.Date < '2004-06-22'

the performance degrades disproportionately to the number of single day
queries it would take to get the results.

Since the main part of the SELECT statement is an aggregate, the first query
does me no good.

Does anyone have a similar experience, and/or suggestion for improving the
performance of this?


















mysql crash when opening a connection

2004-06-22 Thread D D
Hi,

I have installed MySQL 4.0.18 on my Debian box. I am
quite new to Linux. MySQL works fine from the command
line (mysql) and with Apache/phpMyAdmin too. But when
I try to connect from Windows or telnet from
Debian, I get:

Number of processes running now: 0
040622 21:27:33  mysqld restarted
Warning: Ignoring user change to 'mysql' because the
user was set to 'mysql' earlier on the command line
/usr/sbin/mysqld: ready for connections.
Version: '4.0.18-log'  socket:
'/var/run/mysqld/mysqld.sock'  port: 3306
mysqld got signal 11;
This could be because you hit a bug. It is also
possible that this binary
or one of the libraries it was linked against is
corrupt, improperly built,
or misconfigured. This error can also be caused by
malfunctioning hardware.
We will try our best to scrape up some info that will
hopefully help diagnose
the problem, but since we have already crashed,
something is definitely wrong
and this may fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=0
max_connections=100
threads_connected=0
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size +
sort_buffer_size)*max_connections = 233983 K
bytes of memory
Hope that's ok; if not, decrease some variables in the
equation.

thd=(nil)
Attempting backtrace. You can use the following
information to find out
where mysqld died. If you see no messages after this,
something went
terribly wrong...
Cannot determine thread, fp=0xbfffc4d8, backtrace may
not be correct.
Stack range sanity check OK, backtrace follows:
0x810f50b
0x40176825
0x40022e15
0x400221a2
0x400213ee
0x40021297
0x88d
0x811059d
0x40292dc6
0x80c12f1
New value of fp=(nil) failed sanity check, terminating
stack trace!
Please read
http://www.mysql.com/doc/en/Using_stack_trace.html and
follow instructions on how to resolve the stack trace.
Resolved
stack trace is much more helpful in diagnosing the
problem, so please do 
resolve it
The manual page at
http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is
causing the crash.

Number of processes running now: 0
040622 21:27:59  mysqld restarted
Warning: Ignoring user change to 'mysql' because the
user was set to 'mysql' earlier on the command line
/usr/sbin/mysqld: ready for connections.
Version: '4.0.18-log'  socket:
'/var/run/mysqld/mysqld.sock'  port: 3306











Re: R: connectorJ & huge table problem

2004-06-22 Thread Mark Matthews
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Leonardo Francalanci wrote:

>> What happens if you use a nightly snapshot of Connector/J from
>> http://downloads.mysql.com/snapshots.php?
>
> I tried, but I still have the same problem:
>
> java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
>   at com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStatement.java:924)
>   at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)
>
> I have no idea...
>
> I tried adding a smaller table (10 rows) and executing the same
> queries against this table: it doesn't work, while on another
> server (another machine) it works!
> It worked until I did a lot of inserts on one table and created
> split versions of the big table.
>
> Hence I'm not sure the problem is on the client...

Leonardo,

Earlier you said that a statement without a parameter ('?') works, but
one with a parameter doesn't.

In both cases, are you still using a PreparedStatement?

-Mark


- --
Mr. Mark Matthews
MySQL AB, Software Development Manager, J2EE and Windows Platforms
Office: +1 708 332 0507
www.mysql.com

MySQL Guide to Lower TCO
http://www.mysql.com/it-resources/white-papers/tco.php
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFA2Io7tvXNTca6JD8RAiwRAJ9prt9UcID34Sz8/GmfF271y87IRQCfcaKO
NTfj5s43qUDRjV27RiHOZzc=
=6tVu
-END PGP SIGNATURE-




Re: New to Dates - Plain English Please

2004-06-22 Thread Michael Stassen
Jochem van Dieten wrote:
Michael Stassen wrote:
Jochem van Dieten wrote:
David Blomstrom wrote:
Suppose you have a list of dates in the proper format,
like this:
2004-02-04
2003-11-02
and you encounter a date with only the month and year,
like May 2002. How would you insert that

Not. Prompt the user for a full date.

That is not strictly true.

It is what I would do, therefore it is a true answer to the question.
Jochem
Then I misunderstood you.  I took your answer to mean that you cannot store 
dates without the day part, rather than that your advice was not to do so. 
I accept your explanation that you meant the latter, but I don't think 
that's clear from your original wording.

Perhaps you read an accusatory tone in my choice of wording.  That was not 
my intent, and for that I apologize.

Michael


Re: Using date ranges on a composite key degrades performance

2004-06-22 Thread SGreen

Are you using the JOIN ... ON... syntax or the "comma" syntax of joining
tables?



   

From: "David Perron" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Date: 06/22/2004 02:46 PM
Subject: Using date ranges on a composite key degrades performance





I'm finding that when I execute a query joining to a large fact table (~450
million rows) using a composite key of INT(11) & DATE for a single Date,
i.e.

AND dimension.id = factable.id
AND factable.Date = '2004-06-22'

the query returns exceptionally fast.  However, when I change this to a
range, i.e.

AND dimension.id = factable.id
AND factable.Date >= '2004-06-15'
AND factable.Date < '2004-06-22'

the performance degrades disproportionately to the number of single day
queries it would take to get the results.

Since the main part of the SELECT statement is an aggregate, the first
query
does me no good.

Does anyone have a similar experience, and/or suggestion for improving the
performance of this?
















Re: New to Dates - Plain English Please

2004-06-22 Thread Jochem van Dieten
Michael Stassen wrote:
Jochem van Dieten wrote:
David Blomstrom wrote:
Suppose you have a list of dates in the proper format,
like this:
2004-02-04
2003-11-02
and you encounter a date with only the month and year,
like May 2002. How would you insert that
Not. Prompt the user for a full date.
That is not strictly true.
It is what I would do, therefore it is a true answer to the question.
Jochem


Using date ranges on a composite key degrades performance

2004-06-22 Thread David Perron

I'm finding that when I execute a query joining to a large fact table (~450
million rows) using a composite key of INT(11) & DATE for a single Date, i.e.

AND dimension.id = factable.id
AND factable.Date = '2004-06-22'

the query returns exceptionally fast.  However, when I change this to a
range, i.e.

AND dimension.id = factable.id
AND factable.Date >= '2004-06-15'
AND factable.Date < '2004-06-22'

the performance degrades disproportionately to the number of single day
queries it would take to get the results.

Since the main part of the SELECT statement is an aggregate, the first query
does me no good.

Does anyone have a similar experience, and/or suggestion for improving the
performance of this?
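For anyone who wants to experiment with the shape of this query, here is a self-contained sketch using Python's sqlite3 module rather than MySQL. The table and column names (factable, dimension, id, Date) follow the post; the data, the index name, and the amounts are invented, and this only demonstrates the equality-plus-range pattern, not the original performance problem:

```python
import sqlite3

# Toy reproduction of the fact-table range query from the post.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dimension (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE factable (id INTEGER, Date TEXT, amount INTEGER)")
# Composite index with the equality column first, then the range column,
# so the range predicate can still use the index after the id lookup.
cur.execute("CREATE INDEX idx_fact ON factable (id, Date)")

cur.execute("INSERT INTO dimension VALUES (1, 'widget')")
rows = [(1, f"2004-06-{d:02d}", d) for d in range(10, 23)]
cur.executemany("INSERT INTO factable VALUES (?, ?, ?)", rows)

# Aggregate over a date range, as in the second query from the post.
cur.execute("""
    SELECT SUM(f.amount)
    FROM dimension d
    JOIN factable f ON d.id = f.id
    WHERE f.Date >= '2004-06-15' AND f.Date < '2004-06-22'
""")
total = cur.fetchone()[0]
print(total)  # days 15..21 -> 15+16+...+21 = 126
```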









Re: A Complicated Group Query

2004-06-22 Thread SGreen

OOPS! That's because my second query should have been a LEFT JOIN and not a
RIGHT JOIN (I hate making silly cut-and-paste errors like that) 8-{

If I understand you correctly, you want to see a count of how many bookings
you have had per project broken down by user location. Try this:

SELECT c.Project_ID, u.User_Location, COUNT(b.booking_ID) as Bookings
FROM Clients c
LEFT JOIN Users u
  on u.Client_ID = c.Client_ID
LEFT JOIN Bookings b
  on b.User_ID = u.User_ID
GROUP BY c.Project_ID, u.User_Location

Or for just one project, we can do this:

SELECT c.Project_ID, u.User_Location, COUNT(b.booking_ID) as Bookings
FROM Clients c
LEFT JOIN Users u
  on u.Client_ID = c.Client_ID
  and c.Project_ID = 'x'
LEFT JOIN Bookings b
  on b.User_ID = u.User_ID
GROUP BY c.Project_ID, u.User_Location

---  or this 

SELECT c.Project_ID, u.User_Location, COUNT(b.booking_ID) as Bookings
FROM Clients c
LEFT JOIN Users u
  on u.Client_ID = c.Client_ID
LEFT JOIN Bookings b
  on b.User_ID = u.User_ID
WHERE c.Project_ID = 'x'
GROUP BY c.Project_ID, u.User_Location

--- or even this ---

SELECT u.User_Location, COUNT(b.booking_ID) as Bookings
FROM Clients c
INNER JOIN Users u
  on u.Client_ID = c.Client_ID
  and c.Project_ID = 'x'
LEFT JOIN Bookings b
  on b.User_ID = u.User_ID
GROUP BY u.User_Location

If you have any NULL "user_location"s that's because you have at least one
Client that is part of a project but that Client has no Users. In my last
sample I changed the first LEFT JOIN to an INNER JOIN so that I would only
retrieve those Clients (that are part of project 'x') that actually have
Users. I could have also eliminated them on the way out of the query (and
after the group by) like this

SELECT u.User_Location, COUNT(b.booking_ID) as Bookings
FROM Clients c
LEFT JOIN Users u
  on u.Client_ID = c.Client_ID
  and c.Project_ID = 'x'
LEFT JOIN Bookings b
  on b.User_ID = u.User_ID
GROUP BY u.User_Location
HAVING u.User_ID IS NOT NULL

As you can see, there are many ways to build the report you want and you
should use the one that performs best for you.

There _is_  a  difference in putting c.Project_ID = 'x' in the ON clause
and in the WHERE clause. In the ON clause those rows are eliminated BEFORE
the temporary internal table is built. In the WHERE clause you are
filtering on all of the rows. I am going to make up some round numbers to
illustrate. Say you have 50 Clients where 5 of them are part of project
'x'. You have 100 Users and 500 Bookings.
Every Client has at least 1 user and every User has at least 1 Booking
(just for argument's sake). This query:

SELECT *
FROM Clients, Users, Bookings

(or , alternatively)

SELECT *
FROM Clients
INNER JOIN Users
INNER JOIN Bookings

will contain all of the columns of each of the three tables and will return
(50x100x500), or 2.5 million, rows. This is a "cartesian product" of the three
tables. The ON clauses of the JOIN statements keep that to a reasonable
number. Let's say that this is a perfectly balanced world, that each User has
had 5 bookings, and that each Client has 2 users. Then this query

SELECT *
FROM Clients c
INNER JOIN Users u
  ON u.Client_ID = c.Client_ID
INNER JOIN Bookings b
  ON b.User_ID = u.User_ID

will result in only (50x2x5) or 500 rows. It's much easier for the WHERE
clause to scan the resulting columns (assuming no index exists) in a result
set this size than one containing 2.5 million rows.

Adding another restriction to one of the ON clauses helps us out even more

SELECT *
FROM Clients c
INNER JOIN Users u
  ON u.Client_ID = c.Client_ID
  AND c.Project_ID='x'
INNER JOIN Bookings b
  ON b.User_ID = u.User_ID

Results in only (5x2x5) or 50 rows. If you had any additional restrictions
(like in a WHERE clause) you only have to apply those to a dataset of 50
rows. What if we had performed a GROUP BY?  In the first case we would have
had to GROUP 2.5 million rows, in the second 500 rows and in the last only
50.
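Shawn's row-count arithmetic can be checked with a small self-contained script. This uses Python's sqlite3 module with made-up, scaled-down data (5 clients with one in project 'x', 2 users per client, 5 bookings per user); the table names follow the post:

```python
import sqlite3

# Illustrate: a cartesian product of three tables has rows1*rows2*rows3
# rows, while ON-clause restrictions keep the intermediate result small.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Clients  (Client_ID INTEGER, Project_ID TEXT);
    CREATE TABLE Users    (User_ID INTEGER, Client_ID INTEGER);
    CREATE TABLE Bookings (Booking_ID INTEGER, User_ID INTEGER);
""")
# 5 clients (only client 0 is in project 'x'), 2 users each, 5 bookings each.
for c in range(5):
    cur.execute("INSERT INTO Clients VALUES (?, ?)", (c, 'x' if c == 0 else 'y'))
    for u in range(2):
        uid = c * 2 + u
        cur.execute("INSERT INTO Users VALUES (?, ?)", (uid, c))
        for b in range(5):
            cur.execute("INSERT INTO Bookings VALUES (?, ?)", (uid * 5 + b, uid))

cartesian = cur.execute(
    "SELECT COUNT(*) FROM Clients, Users, Bookings").fetchone()[0]
joined = cur.execute("""
    SELECT COUNT(*) FROM Clients c
    JOIN Users u ON u.Client_ID = c.Client_ID AND c.Project_ID = 'x'
    JOIN Bookings b ON b.User_ID = u.User_ID
""").fetchone()[0]
print(cartesian, joined)  # 5*10*50 = 2500 vs 1*2*5 = 10
```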

You cannot always put all of your restrictions into the ON clause of a
JOIN. Let's say you wanted to know what Clients didn't have any users
assigned to them. You need this query:

SELECT c.Client_Name
FROM Clients c
LEFT JOIN Users u
  on u.Client_ID = c.Client_ID
WHERE u.Client_ID IS NULL

In this case you would need the _results_ of the JOIN in order to see which
rows didn't match up. You can't put the IS NULL test into the ON clause, as
none of your Users rows actually have a NULL Client_ID. It's the internal results
of the LEFT JOIN that will have NULL values for all of your Users columns
for those rows where the IDs didn't match.
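A minimal sketch of that LEFT JOIN ... IS NULL pattern, using Python's sqlite3 module with invented data (the table names follow the post):

```python
import sqlite3

# Find clients with no users via LEFT JOIN ... WHERE ... IS NULL.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Clients (Client_ID INTEGER, Client_Name TEXT);
    CREATE TABLE Users   (User_ID INTEGER, Client_ID INTEGER);
    INSERT INTO Clients VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO Users   VALUES (10, 1);  -- only Acme has a user
""")
cur.execute("""
    SELECT c.Client_Name
    FROM Clients c
    LEFT JOIN Users u ON u.Client_ID = c.Client_ID
    WHERE u.Client_ID IS NULL
""")
orphans = [r[0] for r in cur.fetchall()]
print(orphans)  # ['Globex']
```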

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   
 
From: "shaun thornburgh" <[EMAIL PROTECTED]>
RE: New to Dates - Plain English Please

2004-06-22 Thread Paul DuBois
At 9:03 -0700 6/22/04, David Blomstrom wrote:
One more question...
Suppose you have a list of dates in the proper format,
like this:
2004-02-04
2003-11-02
and you encounter a date with only the month and year,
like May 2002. How would you insert that, something
like this?:
2004-02-04
2003-11-02
2002-05---
The manual can help you here. See:
http://dev.mysql.com/doc/mysql/en/Date_and_time_types.html
Second paragraph.
--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com


Re: New to Dates - Plain English Please

2004-06-22 Thread Paul DuBois
At 23:40 -0700 6/21/04, David Blomstrom wrote:
I  haven't worked with dates yet and wondered if
someone could give me an overview in plain English.
At the moment, I'm working on a table with a column of
dates in this format:
March 2, 2003
July 7, 2004
If I understand the Manual, the correct format for
these dates in a MySQL table would be like this:
03-02-2003
07-07-2004
Am I right?
No.  Which part of the manual leads you to believe that this
is the correct format for dates?
Dates should be entered in YYYY-MM-DD format, not MM-DD-YYYY.
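If the existing data really is in "March 2, 2003" form, it has to be converted before loading. A minimal Python sketch (assuming English month names; the function name is made up):

```python
from datetime import datetime

# Convert dates like "March 2, 2003" into the YYYY-MM-DD form MySQL expects.
def to_mysql_date(text: str) -> str:
    return datetime.strptime(text, "%B %d, %Y").strftime("%Y-%m-%d")

print(to_mysql_date("March 2, 2003"))  # 2003-03-02
print(to_mysql_date("July 7, 2004"))   # 2004-07-07
```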
--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com


Re: New to Dates - Plain English Please

2004-06-22 Thread Michael Stassen
Jochem van Dieten wrote:
David Blomstrom wrote:
Suppose you have a list of dates in the proper format,
like this:
2004-02-04
2003-11-02
and you encounter a date with only the month and year,
like May 2002. How would you insert that

Not. Prompt the user for a full date.
Jochem
That is not strictly true.  You can use 00 if there is no day, that is, 
insert May 2002 as '2002-05-00'.  This can be useful for historical data 
where you know the month but not the day.  If you do this, however, you must 
account for it in any code which uses the date.  That is, as soon as you 
allow this, you cannot assume any date has a valid day part.

MySQL will also let you insert a date of '2002-00-00'.  Again, this might 
prove useful for data where sometimes only the year is known, but you would 
have to code with the assumption that dates might be just years.

Note that mysql does not consider dates with 00s to be valid dates, which 
affects some date functions but not others.  For example, you can pull out 
the day, month, and year parts of partial dates

  SELECT YEAR('2002-05-00'), MONTH('2002-05-00'), DAYOFMONTH('2002-05-00');
  +--------------------+---------------------+--------------------------+
  | YEAR('2002-05-00') | MONTH('2002-05-00') | DAYOFMONTH('2002-05-00') |
  +--------------------+---------------------+--------------------------+
  |               2002 |                   5 |                        0 |
  +--------------------+---------------------+--------------------------+
  1 row in set (0.00 sec)

  SELECT YEAR('2002-00-00'), MONTH('2002-00-00'), DAYOFMONTH('2002-00-00');
  +--------------------+---------------------+--------------------------+
  | YEAR('2002-00-00') | MONTH('2002-00-00') | DAYOFMONTH('2002-00-00') |
  +--------------------+---------------------+--------------------------+
  |               2002 |                   0 |                        0 |
  +--------------------+---------------------+--------------------------+
  1 row in set (0.00 sec)
but you can't add/subtract an interval to/from a partial date
  SELECT '2002-05-04' + INTERVAL 1 DAY AS full_date,
 '2002-05-00' + INTERVAL 1 DAY AS no_day,
 '2002-00-00' + INTERVAL 1 DAY AS no_month;
  +------------+--------+----------+
  | full_date  | no_day | no_month |
  +------------+--------+----------+
  | 2002-05-05 | NULL   | NULL     |
  +------------+--------+----------+
  1 row in set (0.00 sec)
One should consider carefully whether allowing partial dates will be worth 
the hassle.

Michael


Performance issues

2004-06-22 Thread Aram Mirzadeh

We have an internal SNMP monitoring system that is monitoring about 
10,000 devices.  Each device is pinged, then polled for an average 
of 25-30 elements.  The ping results and element values are then 
stored in text file, then another system picks them up (NFS) and inserts 
them into a MyISAM (3.23.54) database.  The data is kept for 13 weeks.

The database system is a Xeon 4 way, 12GB of ram with a striped raid 
array dedicated to the database files and its indexes and such.

Every 5 minutes another process goes through the last set of inserts and 
compares them for any threshold breaches, so the entire last set of data 
is looked at.

We're falling behind on the inserts because the system can't seem to 
handle the amount of inserts, the front end that generates the web pages 
based on the previous records is dogging down.

I have read the regular optimizations papers and have done as much as I 
felt safe, are there any huge database optimization papers?  Anything I 
should be looking at?

Here are the relevant my.cnf entries:
set-variable= key_buffer=256M
set-variable= max_allowed_packet=1M
set-variable= table_cache=256
set-variable= sort_buffer=1M
set-variable= record_buffer=1M
set-variable= myisam_sort_buffer_size=64M
set-variable= thread_cache=8
set-variable= thread_concurrency=8
[mysql]
no-auto-rehash
[isamchk]
set-variable= key_buffer=128M
set-variable= sort_buffer=128M
set-variable= read_buffer=2M
set-variable= write_buffer=2M
[myisamchk]
set-variable= key_buffer=128M
set-variable= sort_buffer=128M
set-variable= read_buffer=2M
set-variable= write_buffer=2M
And here is my top output:
MySQL on 1.2.3.4 (3.23.54)  up 2+06:36:05 [13:10:01]
 Queries: 191.5M  qps: 1022  Slow: 296.0  Se/In/Up/De(%): 22/10/62/00
 qps now: 147  Slow qps: 0.0  Threads: 9 (2/7)  50/00/00/00
 Key Efficiency: 96.7%  Bps in/out: 4.0k/1.6k  Now in/out: 6.2k/767.7k

Any suggestions would be greatly appreciated.
Aram
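One common first step for insert-bound workloads like this is batching many rows per statement or transaction instead of committing row by row (in MySQL, multi-row INSERT or LOAD DATA INFILE). This sqlite3 sketch only illustrates the batching pattern with invented table and column names, not MySQL performance:

```python
import sqlite3

# Batch many rows into one executemany call / one transaction,
# rather than one INSERT-and-commit per row.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE samples (device INTEGER, element INTEGER, value REAL)")

# 100 devices x 25 elements, inserted as a single batch.
rows = [(d, e, float(d * e)) for d in range(100) for e in range(25)]
cur.executemany("INSERT INTO samples VALUES (?, ?, ?)", rows)
conn.commit()

count = cur.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(count)  # 2500
```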


Re: mysql control center documentation

2004-06-22 Thread Michael Stassen
Eric Bergen wrote:
The mysql.com (and php.net and freshmeat.net) sites all have a search feature
where, if a requested file is not found, the site performs a search for it
as a keyword.
Interesting.  I didn't know that.  That's nice, but it's hardly intuitive. 
I expect that's designed to help with broken links, rather than to be the 
expected way to search.  After all, there is a search box at the top.  A 
good interface, however, would make it clear where to go, rather than 
requiring you to guess a URL or search for something as basic as product 
documentation.

Example finding the mysql control center manual is as easy as requesting
http://www.mysql.com/mysql control center 

or depending on your browser
http://www.mysql.com/mysql%20control%20center
will perform a search of mysql.com 'mysql control center'
The search reveals installation instructions 
 and a section in the 
mysql manual  (turns out my 
original suggestion wasn't useless, after all).  The latter is not linked 
anywhere on the mysqlcc page!

-Eric
Michael



Re: Views Syntax for MySQL 5

2004-06-22 Thread Josh Trutwin
On Tue, 22 Jun 2004 16:34:44 +0100
"Nic Skitt [Real Productions]" <[EMAIL PROTECTED]> wrote:

> Hi All,
>  
> I have noticed a lack of comments re Views.
>  
> Is it 5.0 or 5.1 that we will have Views capability?
>  
> I had hoped it was in the most up to date public development release
> of 5. Unless I am getting the Views syntax wrong I assume its not
> there.
>  
> If it is not already packaged in V5.0 then will the syntax be the
> standard SQL view syntax?

The online manual is your friend:

http://dev.mysql.com/doc/mysql/en/index.html

First hit for searching on "views":

http://dev.mysql.com/doc/mysql/en/ANSI_diff_Views.html

Sounds like it will make it into the 5.0 branch, but has not yet.

Another good link:

http://dev.mysql.com/doc/mysql/en/Roadmap.html

Josh
 




Re: load data into 2 tables and set id

2004-06-22 Thread J S
Shawn,
Here are the results:
$ ls -l /sawmill/rawlog/SG_CSGL02_main_47060821.log.gz
-rw-r-----   1 bluecoat staff  138510199 Jun 14 10:04 
/sawmill/rawlog/SG_CSGL02_main_47060821.log.gz

$ time ./logfile.pl /sawmill/rawlog/SG_CSGL02_main_47060821.log.gz
Tue Jun 22 16:53:40 2004: PARSING 
/sawmill/rawlog/SG_CSGL02_main_47060821.log.gz...
Tue Jun 22 17:03:37 2004: BULK_TABLE_INSERT...
Tue Jun 22 17:05:41 2004: INSERT_URL_SCHEMES...
Tue Jun 22 17:06:08 2004: INSERT_URL_SERVERS...
Tue Jun 22 17:08:06 2004: INSERT_URL_PATHS...
Tue Jun 22 17:11:20 2004: INSERT_URL_VISITS...
Tue Jun 22 17:28:16 2004: INSERT_INTERNET_USAGE...
Tue Jun 22 17:42:01 2004: Finished

real    47m16.68s
user    10m44.47s
sys      0m17.95s
$
# ls -l
total 1206168
-rw-rw----   1 mysql  mysql  379692348 Jun 22 17:05 bulk_table.MYD
-rw-rw----   1 mysql  mysql       1024 Jun 22 17:05 bulk_table.MYI
-rw-rw----   1 mysql  mysql       8760 Jun 22 16:51 bulk_table.frm
-rw-rw----   1 mysql  mysql  115443608 Jun 22 17:40 internet_usage.MYD
-rw-rw----   1 mysql  mysql   19328000 Jun 22 17:42 internet_usage.MYI
-rw-rw----   1 mysql  mysql       8646 Jun 22 16:51 internet_usage.frm
-rw-rw----   1 mysql  mysql   33437600 Jun 22 17:11 url_paths.MYD
-rw-rw----   1 mysql  mysql   27696128 Jun 22 17:11 url_paths.MYI
-rw-rw----   1 mysql  mysql       8574 Jun 22 16:51 url_paths.frm
-rw-rw----   1 mysql  mysql         80 Jun 22 17:06 url_schemes.MYD
-rw-rw----   1 mysql  mysql       3072 Jun 22 17:06 url_schemes.MYI
-rw-rw----   1 mysql  mysql       8578 Jun 22 16:51 url_schemes.frm
-rw-rw----   1 mysql  mysql     646160 Jun 22 17:08 url_servers.MYD
-rw-rw----   1 mysql  mysql     796672 Jun 22 17:08 url_servers.MYI
-rw-rw----   1 mysql  mysql       8578 Jun 22 16:51 url_servers.frm
-rw-rw----   1 mysql  mysql   24082472 Jun 22 17:28 url_visit.MYD
-rw-rw----   1 mysql  mysql   16331776 Jun 22 17:28 url_visit.MYI
-rw-rw----   1 mysql  mysql       8736 Jun 22 16:51 url_visit.frm
# du -sk .
603088  .
I had a problem trying to get the hits results. It works through the mysql 
command line but if I try to run mysql < hits.sql  > hits.txt it can't find 
the temp table tmpStats.

js.
Great catch! I believe you found the problem. Sorry about that!   ;-D
So... just curious... from the original 60GB of text data, how much space
did the final data require (including indices)?
Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

From: "J S" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Date: 06/22/2004 07:55 AM
Subject: Re: load data into 2 tables and set id



I think I fixed it!
INSERT IGNORE url_visit (url_server_ID, url_path_ID, querystring,
category)
SELECT DISTINCT us.ID, up.ID, if(bt.path_split >0, SUBSTRING(bt.url,path),
NULL),
bt.category
FROM bulk_table bt
INNER JOIN url_servers us
   ON us.server = bt.server
INNER JOIN url_paths up
   on up.path=bt.path
After this I ran the sql to insert into internet_usage and this time it
finished after 10m32.
I'll do some more testing and let you know how it goes.
Thanks,
js.
>
>Did you mean there to be duplicates in the url_visits? Do I need to use
>IGNORE in the following SQL?
>
>INSERT url_visit (url_server_ID, url_path_ID, querystring,
>category)
>SELECT us.ID, up.ID, if(bt.path_split >0, SUBSTRING(bt.url,path),
>NULL),
>bt.category
>FROM bulk_table bt
>INNER JOIN url_servers us
>   ON us.server = bt.server
>INNER JOIN url_paths up
>   on up.path=bt.path
>
>js.
>
>mysql> select * from url_visit where urlid=1631;
>urlid: 1631
>url_server_ID: 21720
>url_path_ID: 630695
>query: 
>cid=mrkbaki&src=catcmrkdswi|p1|rshop/catcmrkseai|p1|rshop/catcmrkhori|p1|rshop/catcmrki|p1|rshop/catcripa|p1|rgift
>category: Online Sales
>1 row in set (0.01 sec)
>
>mysql> select * from url_visit where urlid=1753;
>(remaining output truncated in the original message)

Re: mysql control center documentation

2004-06-22 Thread Eric Bergen
The mysql.com (and php.net and freshmeat.net) sites all have a search feature
where, if a requested file is not found, the site performs a search for it
as a keyword.

Example finding the mysql control center manual is as easy as requesting
http://www.mysql.com/mysql control center 

or depending on your browser
http://www.mysql.com/mysql%20control%20center

will perform a search of mysql.com 'mysql control center'

-Eric

On Tue, 22 Jun 2004 12:26:39 -0400, Bartis, Robert M (Bob)
<[EMAIL PROTECTED]> wrote:
> 
> Had the same issue. I've bookmarked it, but it's not clear why it's so hard to find. 
> It's good stuff, man, put it out front :-)
> 
> -Original Message-
> From: Michael Stassen [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, June 22, 2004 12:13 PM
> To: Chris Stevenson
> Cc: [EMAIL PROTECTED]
> Subject: Re: mysql control center documentation
> 
> The MySQL web site was recently reorganized.  For some reason, when they did
> so, they buried the manual 4 clicks away in the "Developer Zone".  It's not
> clear to me how anyone new is supposed to find it there.  In any case, the
> URL is .
> 
> Michael
> 
> Chris Stevenson wrote:
> > Is there a user guide available anywhere?  I can't seem to find anything
> > on mysql.com
> 
> 
> 
> 
> 
>




Re: Newby to loading

2004-06-22 Thread gerald_clark

Victor Pendleton wrote:
load data infile '' into table helpdeskgrouptotal; 

-Original Message-
From: Chris Stevenson
To: [EMAIL PROTECTED]
Sent: 6/22/04 9:58 AM
Subject: Newby to loading
I'm trying to load ACD call data into a table for the first time.  I've
saved my data file as a text file on my root directory (win xp c:\)

When I try to run:

LOAD DATA INFILE '/GRPO617'

Try giving the full path, including the drive letter:

LOAD DATA INFILE 'C:/GRPO617'
INTO TABLE helpdeskgrouptotal;

I get the following error in mysql control center:

[Chris Stevenson] ERROR 1105: File '\GRPO617' not found (Errcode: 2)

Have a great day and thanks in advance for any assistance!

Chris Stevenson
Call Center Manager
The American Board of Family Practice

 




RE: mysql control center documentation

2004-06-22 Thread Bartis, Robert M (Bob)
Had the same issue. I've bookmarked it, but it's not clear why it's so hard to find. 
It's good stuff, man, put it out front :-)

-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 22, 2004 12:13 PM
To: Chris Stevenson
Cc: [EMAIL PROTECTED]
Subject: Re: mysql control center documentation


The MySQL web site was recently reorganized.  For some reason, when they did 
so, they buried the manual 4 clicks away in the "Developer Zone".  It's not 
clear to me how anyone new is supposed to find it there.  In any case, the 
URL is .

Michael

Chris Stevenson wrote:
> Is there a user guide available anywhere?  I can't seem to find anything
> on mysql.com






Re: mysql control center documentation

2004-06-22 Thread Michael Stassen
Somehow I didn't notice "mysql control center" in the Subject until the 
exact moment I hit "Send".  Sorry about that.  Please ignore my unhelpful reply.

I seem to remember that your question has been asked before, and that the 
answer was that it's pretty self-explanatory.  You might check the list 
archives.

Michael
Michael Stassen wrote:
The MySQL web site was recently reorganized.  For some reason, when they 
did so, they buried the manual 4 clicks away in the "Developer Zone".  
It's not clear to me how anyone new is supposed to find it there.  In 
any case, the URL is .

Michael
Chris Stevenson wrote:
Is there a user guide available anywhere?  I can't seem to find anything
on mysql.com




Re: New to Dates - Plain English Please

2004-06-22 Thread Jochem van Dieten
David Blomstrom wrote:
Suppose you have a list of dates in the proper format,
like this:
2004-02-04
2003-11-02
and you encounter a date with only the month and year,
like May 2002. How would you insert that
Not. Prompt the user for a full date.
Jochem


Re: mysql control center documentation

2004-06-22 Thread Michael Stassen
The MySQL web site was recently reorganized.  For some reason, when they did 
so, they buried the manual 4 clicks away in the "Developer Zone".  It's not 
clear to me how anyone new is supposed to find it there.  In any case, the 
URL is .

Michael
Chris Stevenson wrote:
Is there a user guide available anywhere?  I can't seem to find anything
on mysql.com



Re: A Complicated Group Query

2004-06-22 Thread shaun thornburgh
Hi Shawn,
Thanks for your reply, I am still having problems here though!
The first query produces this:
+---------------------+---------------+
| COUNT(B.Booking_ID) | User_Location |
+---------------------+---------------+
|                1818 | NULL          |
|                   1 | 204           |
|                   1 | 301           |
|                   1 | 302           |
|                   1 | 408           |
+---------------------+---------------+
The location counts here are correct except for the NULL, but the other 
locations haven't been included.

The second query, produces a complete list of all the Users locations!
There is another table that might help here; Clients. Each User is 
referenced to a client and each project is referenced to a client. So a 
client has many projects and many users in the database. If I
modify my initial query to:

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
Bookings B ON U.User_ID = B.User_ID WHERE U.Client_ID = 'X' GROUP 
BY(U.User_Location);

It works perfectly for all clients that have one project. But I need to be 
able to produce the same report for individual projects rather than clients!

I hope you can help me here, many thanks!
From: [EMAIL PROTECTED]
To: "shaun thornburgh" <[EMAIL PROTECTED]>
CC: [EMAIL PROTECTED]
Subject: Re: A Complicated Group Query
Date: Tue, 22 Jun 2004 10:12:22 -0400
Hi Shaun,
You said:
>SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
>Bookings B ON U.User_ID = B.User_ID AND B.Project_ID = '10' GROUP
>BY(U.User_Location);
>
>It returns all User Locations in the User table, regardless of the
>Project_ID of the Booking.
That's exactly how the LEFT JOIN is supposed to work. The users are on the
LEFT side of the LEFT JOIN so you will get _all_ rows from that table and
only those rows from the table on the *other* side of the join that satisfy
your ON conditions.
You can flip your query two ways. Change either LEFT JOIN to RIGHT JOIN or
exchange the positions of the table names around the JOIN keywords
SELECT COUNT(B.Booking_ID), U.User_Location
FROM Users U
RIGHT OUTER JOIN Bookings B
  ON U.User_ID = B.User_ID
  AND B.Project_ID = '10'
GROUP BY(U.User_Location);
-OR-
SELECT COUNT(B.Booking_ID), U.User_Location
FROM Bookings B
RIGHT OUTER JOIN Users U
  ON U.User_ID = B.User_ID
  AND B.Project_ID = '10'
GROUP BY(U.User_Location);
Either one of these statements puts the Bookings table into the
"controlling" side of the partial join.
Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

From: "shaun thornburgh" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: 06/22/2004 09:41 AM
Subject: A Complicated Group Query



Hi,
I have three tables (among others!) in my database; Bookings, Users and
Projects. Users have location codes and can make bookings which are for a
particular project.
Using the following query I can get a list of all Locations (Each user has
a
Location code) and the number of Bookings that have taken place for that
location.
SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
Bookings B ON U.User_ID = B.User_ID GROUP BY(U.User_Location);
This is useful as it shows zero values where no bookings have taken place.
However, Each booking is related to a particular project by Project_ID. How
can I modify this query such that it only returns a count of bookings where
the Project_ID = 'x'? If I modify this query to:
SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
Bookings B ON U.User_ID = B.User_ID AND B.Project_ID = '10' GROUP
BY(U.User_Location);
It returns all User Locations in the User table, regardless of the
Project_ID of the Booking. And if I modify the query to:
SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
Bookings B ON U.User_ID = B.User_ID WHERE B.Project_ID = '10' GROUP
BY(U.User_Location);
I lose the zero/NULL Values...
Any comments here would be greatly appreciated!
_
It's fast, it's easy and it's free. Get MSN Messenger today!
http://www.msn.co.uk/messenger




_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger



Re: R: R: connectorJ & huge table problem

2004-06-22 Thread Fagyal, Csongor
Richard F. Rebel wrote:
Next time you come over, maybe you can help hold him or take the shot. 
He's such a pain to take pictures of, he walks right up to the camera.
 

Yeah, I also have problems taking snapshots of my cat...  use FEL to 
pre-flash so he won't blink when you take the actual picture.

:-)))
- Cs.


RE: New to Dates - Plain English Please

2004-06-22 Thread David Blomstrom
One more question...

Suppose you have a list of dates in the proper format,
like this:

2004-02-04
2003-11-02

and you encounter a date with only the month and year,
like May 2002. How would you insert that, something
like this?:

2004-02-04
2003-11-02
2002-05---




__
Do you Yahoo!?
Yahoo! Mail - 50x more storage than other providers!
http://promotions.yahoo.com/new_mail




mysql control center documentation

2004-06-22 Thread Chris Stevenson
Is there a user guide available anywhere?  I can't seem to find anything
on mysql.com

 



Views Syntax for MySQL 5

2004-06-22 Thread Nic Skitt [Real Productions]
Hi All,
 
I have noticed a lack of comments re Views.
 
Is it 5.0 or 5.1 that we will have Views capability?
 
I had hoped it was in the most up to date public development release of 5.
Unless I am getting the Views syntax wrong I assume its not there.
 
If it is not already packaged in V5.0 then will the syntax be the standard
SQL view syntax?
 
Cheers
 
Kind Regards
 
Nic Skitt
Real Productions UK Limited
 

 


Re: R: R: R: connectorJ & huge table problem

2004-06-22 Thread SGreen

I think it was a mistake. I did not understand it either.


   

From: "Leonardo Francalanci" <[EMAIL PROTECTED]>
To: "Richard F. Rebel" <[EMAIL PROTECTED]>
Cc: "'Mysql'" <[EMAIL PROTECTED]>
Date: 06/22/2004 10:18 AM
Subject: R: R: R: connectorJ & huge table problem





> Next time you come over, maybe you can help hold him or take the shot.
> He's such a pain to take pictures of, he walks right up to the camera.

Mmmh... my English is very bad... I can't understand...
is it a joke or just a mistake?



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



R: R: R: connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
> Next time you come over, maybe you can help hold him or take the shot. 
> He's such a pain to take pictures of, he walks right up to the camera.

Mmmh... my English is very bad... I can't understand...
is it a joke or just a mistake?



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Newby to loading

2004-06-22 Thread Victor Pendleton
load data infile '' into table helpdeskgrouptotal; 
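
The usual cause of Errcode 2 here is path handling on Windows: backslashes are escape characters in MySQL strings, and a bare leading slash makes the server look at the root of its own data drive rather than where you expect. A hedged sketch (assuming the file really is at C:\GRPO617 on the server host):

```sql
-- Use forward slashes (or doubled backslashes) and the full drive path:
LOAD DATA INFILE 'C:/GRPO617'
INTO TABLE helpdeskgrouptotal;

-- Equivalent with escaped backslashes:
-- LOAD DATA INFILE 'C:\\GRPO617' INTO TABLE helpdeskgrouptotal;
```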

-Original Message-
From: Chris Stevenson
To: [EMAIL PROTECTED]
Sent: 6/22/04 9:58 AM
Subject: Newby to loading

I'm trying to load ACD call data into a table for the first time.  I've
saved my data file as a text file on my root directory (win xp c:\)

 

When I try to run:

 

LOAD DATA INFILE '/GRPO617'

INTO TABLE helpdeskgrouptotal;

 

I get the following error in mysql control center:

 

[Chris Stevenson] ERROR 1105: File '\GRPO617' not found (Errcode: 2)

 

Have a great day and thanks in advance for any assistance!

 

Chris Stevenson

Call Center Manager

The American Board of Family Practice

 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: R: R: connectorJ & huge table problem

2004-06-22 Thread Richard F. Rebel

Next time you come over, maybe you can help hold him or take the shot. 
He's such a pain to take pictures of, he walks right up to the camera.

On Tue, 2004-06-22 at 09:50, Leonardo Francalanci wrote:
> > It looks as if you are being over-complex anyway. Isn't what you want
> > 
> > SELECT * FROM PARTITIONED_B ORDER BY RAND() LIMIT 1 ;
> > 
> > which picks a single random record.
> 
> 
> No, I have to know which ID I pick before doing the select
> (because I split my data, and when I access the split table
> I have code like:
> 
> int table = ((m / 200) + 1);
> String query = "SELECT * FROM PARTITIONED_B_" + table + " WHERE ID=?";
> )
> 
> 
> Thank you anyway
-- 
Richard F. Rebel
[EMAIL PROTECTED]
t. 212.239.


signature.asc
Description: This is a digitally signed message part


Newby to loading

2004-06-22 Thread Chris Stevenson
I'm trying to load ACD call data into a table for the first time.  I've
saved my data file as a text file on my root directory (win xp c:\)

 

When I try to run:

 

LOAD DATA INFILE '/GRPO617'

INTO TABLE helpdeskgrouptotal;

 

I get the following error in mysql control center:

 

[Chris Stevenson] ERROR 1105: File '\GRPO617' not found (Errcode: 2)

 

Have a great day and thanks in advance for any assistance!

 

Chris Stevenson

Call Center Manager

The American Board of Family Practice

 



R: connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
> What happens if you use a nightly snapshot of Connector/J from
> http://downloads.mysql.com/snapshots.php?

I tried, but I still have the same problem:

java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
at
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStateme
nt.java:924)
at
com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)

I have no idea...

I tried adding a smaller table (10 rows) and executing the same
queries against this table: it doesn't work, while on another
server (another machine) it works!
It worked until I did a lot of inserts on one table and created
split versions of the big one.

Hence I'm not sure the problem is on the client...



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



R: R: connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
> It looks as if you are being over-complex anyway. Isn't what you want
> 
> SELECT * FROM PARTITIONED_B ORDER BY RAND() LIMIT 1 ;
> 
> which picks a single random record.


No, I have to know which ID I pick before doing the select
(because I split my data, and when I access the split table
I have code like:

int table = ((m / 200) + 1);
String query = "SELECT * FROM PARTITIONED_B_" + table + " WHERE ID=?";
)


Thank you anyway
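
The id-to-table routing Leonardo describes is a pure computation, so it can be sketched on its own. Class and method names below are illustrative, not from the original post; the original divides by 200 before adding 1, so ids 0-199 map to table 1, 200-399 to table 2, and so on.

```java
// Minimal sketch of the partition routing described above.
public class PartitionRouter {
    // Map an id to its PARTITIONED_B_<n> table number.
    static int tableFor(int m) {
        return (m / 200) + 1;
    }

    // Build the per-partition query string the post shows.
    static String queryFor(int m) {
        return "SELECT * FROM PARTITIONED_B_" + tableFor(m) + " WHERE ID=?";
    }

    public static void main(String[] args) {
        // prints SELECT * FROM PARTITIONED_B_3 WHERE ID=?
        System.out.println(queryFor(450));
    }
}
```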



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: R: connectorJ & huge table problem

2004-06-22 Thread Alec . Cawley
"Leonardo Francalanci" <[EMAIL PROTECTED]> wrote on 22/06/2004 
14:26:54:

> > A non-streaming connection will place the resultset in memory. With 
large
> > tables you can run out of memory and the application will stop. 
> 
> 
> I get only 1 record per time (accessing by key), so I don't think
> it could be an issue.

It looks as if you are being over-complex anyway. Isn't what you want

SELECT * FROM PARTITIONED_B ORDER BY RAND() LIMIT 1 ;

which picks a single random record.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: connectorJ & huge table problem

2004-06-22 Thread Mark Matthews
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Leonardo Francalanci wrote:

> I have a large table (64,000,000 rows).
> Everything was fine when the table was 16,000,000 rows,
> now connectorJ crashes:
>
> java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
>   at
>
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStateme
> nt.java:908)
>   at
> com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)
>
>
>
> my code is:
>
> java.sql.Connection regularConn =
ConnectionHandler.getRegularConnection();
>
> java.sql.PreparedStatement ps1 = null;
> //letspartyConn.prepareStatement(query);
> for (int i = 0; i < 1000; i++) {
>  String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
>  ps1 = regularConn.prepareStatement(query);
>  int m = new Double(Math.random() * 6400).intValue();
>  ps1.setInt(1, 12352597);
>  ResultSet rs = ps1.executeQuery();
>  while (rs.next()) {
>   int g = rs.getInt(2);
>  }
>  rs.close();
>  ps1.close();
>
> }
> regularConn.close();
>
>
>
>
> I debugged the code, and it seems to happen when it reads a long
> (readLongLong),
> but I don't have any data in my db that is long (everything is int, and in
> fact
> every time the method readLongLong gets called it crashes).
> The error seems very random (sometimes happens, some others no, and always
> with different IDs),
> I have no idea.
>
> Note that using the prepared statement in the proper way, that is:
>
> String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
> ps1 = regularConn.prepareStatement(query);
> for (int i = 0; i < 1000; i++) {
>  int m = new Double(Math.random() * 6400).intValue();
>  ps1.setInt(1, 12352597);
>  ResultSet rs = ps1.executeQuery();
>  while (rs.next()) {
>   int g = rs.getInt(2);
>  }
>  rs.close();
> }
> ps1.close();
> regularConn.close();
>
>
> everything works fine. Re-creating the PreparedStatement each time
gives me
> the error
> above when I call ResultSet rs = ps1.executeQuery().
>
> I'm using connectorJ 3.1.2, mysql 4.1.2

What happens if you use a nightly snapshot of Connector/J from
http://downloads.mysql.com/snapshots.php?

-Mark


- --
Mr. Mark Matthews
MySQL AB, Software Development Manager, J2EE and Windows Platforms
Office: +1 708 332 0507
www.mysql.com

MySQL Guide to Lower TCO
http://www.mysql.com/it-resources/white-papers/tco.php
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.3 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFA2EQltvXNTca6JD8RAr/LAJ4vKGikqzJTi/tRu2rq0rqbKvOEWACgqKG1
1mKgveQTUG+3c99Qb7XPiAM=
=g+xo
-END PGP SIGNATURE-

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



R: connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
It only happens if I use the PreparedStatement.setInt(...)
Using a statement like
"SELECT * FROM PARTITIONED_B WHERE ID=" + m;

instead of
"SELECT * FROM PARTITIONED_B WHERE ID=?";
preparedStetement.setInt(1, m);
(which is more correct)

works.

If nobody has a clue I will report a bug in a couple of days...


> -Original Message-
> From: Leonardo Francalanci [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, June 22, 2004 3:09 PM
> To: Mysql
> Subject: R: connectorJ & huge table problem
>
>
> I have also weird results:
> I access data by ID, which is a primary key in the table.
> Some times (again, very random behaviour) I get 4 rows back
> (instead of just 1) and the data I get is always
>
> 1275749989
>
> This happens with random IDs.
>
> the code to check this is:
> [..]
>  ResultSet rs = ps1.executeQuery();
>  boolean c = false;
>  while (rs.next()) {
>   int g = rs.getInt(2);
>   if (c) {
>// this shouldn't happen, because I'm selecting by ID, but it happens!
>System.out.println(m + " " + g);
>   }
>   c = true;
>  }
>
> > -Original Message-
> > From: Leonardo Francalanci [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, June 22, 2004 2:43 PM
> > To: Mysql
> > Subject: connectorJ & huge table problem
> >
> >
> > I have a large table (64,000,000 rows).
> > Everything was fine when the table was 16,000,000 rows,
> > now connectorJ crashes:
> >
> > java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
> > at
> > com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPrepa
> > redStateme
> > nt.java:908)
> > at
> >
> com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)
> >
> >
> >
> > my code is:
> >
> > java.sql.Connection regularConn =
> > ConnectionHandler.getRegularConnection();
> >
> > java.sql.PreparedStatement ps1 = null;
> > //letspartyConn.prepareStatement(query);
> > for (int i = 0; i < 1000; i++) {
> >  String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
> >  ps1 = regularConn.prepareStatement(query);
> >  int m = new Double(Math.random() * 6400).intValue();
> >  ps1.setInt(1, 12352597);
> >  ResultSet rs = ps1.executeQuery();
> >  while (rs.next()) {
> >   int g = rs.getInt(2);
> >  }
> >  rs.close();
> >  ps1.close();
> >
> > }
> > regularConn.close();
> >
> >
> >
> >
> > I debugged the code, and it seems to happen when it reads a long
> > (readLongLong),
> > but I don't have any data in my db that is long (everything is
> int, and in
> > fact
> > every time the method readLongLong gets called it crashes).
> > The error seems very random (sometimes happens, some others no,
> and always
> > with different IDs),
> > I have no idea.
> >
> > Note that using the prepared statement in the proper way, that is:
> >
> > String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
> > ps1 = regularConn.prepareStatement(query);
> > for (int i = 0; i < 1000; i++) {
> >  int m = new Double(Math.random() * 6400).intValue();
> >  ps1.setInt(1, 12352597);
> >  ResultSet rs = ps1.executeQuery();
> >  while (rs.next()) {
> >   int g = rs.getInt(2);
> >  }
> >  rs.close();
> > }
> > ps1.close();
> > regularConn.close();
> >
> >
> > everything works fine. Re-creating the PreparedStatement each
> > time gives me
> > the error
> > above when I call ResultSet rs = ps1.executeQuery().
> >
> > I'm using connectorJ 3.1.2, mysql 4.1.2
> >
> >
> >
> >
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



R: connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
> A non-streaming connection will place the resultset in memory. With large
> tables you can run out of memory and the application will stop. 


I get only 1 record per time (accessing by key), so I don't think
it could be an issue.

Thank you anyway!


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: connectorJ & huge table problem

2004-06-22 Thread Victor Pendleton
A non-streaming connection will place the resultset in memory. With large
tables you can run out of memory and the application will stop. A streaming
resultset does not place the entire dataset in memory and therefore you can
retrieve `large` data sets. In older versions of Connector/J the default was
non-streaming.
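
A sketch of how a streaming result set is requested in Connector/J, assuming a reachable server and the MySQL JDBC driver on the classpath (URL, user, and password below are placeholders):

```java
import java.sql.*;

public class StreamingExample {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");

        // TYPE_FORWARD_ONLY + CONCUR_READ_ONLY + a fetch size of
        // Integer.MIN_VALUE is Connector/J's signal to stream rows
        // one at a time instead of buffering the whole result set.
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE);

        ResultSet rs = stmt.executeQuery("SELECT ID FROM PARTITIONED_B");
        while (rs.next()) {
            // process one row at a time; memory use stays bounded
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}
```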

-Original Message-
From: Leonardo Francalanci
To: 'Mysql '
Sent: 6/22/04 7:55 AM
Subject: R: connectorJ & huge table problem

> Are you using a streaming connection? 

ehm... how do I know?
My conn string is

jdbc:mysql://192.168.0.253:3000/LETSPARTY1?autoReconnect=yes

where 192.168.0.253 is a machine on my LAN...



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



R: connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
> Dude.  I wouldn't post your server ip here.  ouch.

Well, it's a LAN address!



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: connectorJ & huge table problem

2004-06-22 Thread Paul McNeil
Dude.  I wouldn't post your server ip here.  ouch.



God Bless

Paul C. McNeil
Developer in Java, MS-SQL, MySQL, and web technologies.













GOD BLESS AMERICA!
To God Be The Glory!

-Original Message-
From: Leonardo Francalanci [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 22, 2004 8:55 AM
To: 'Mysql '
Subject: R: connectorJ & huge table problem


> Are you using a streaming connection? 

ehm... how do I know?
My conn string is

jdbc:mysql://192.168.0.253:3000/LETSPARTY1?autoReconnect=yes

where 192.168.0.253 is a machine on my LAN...



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: A Complicated Group Query

2004-06-22 Thread SGreen

Hi Shaun,

You said:
>SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
>Bookings B ON U.User_ID = B.User_ID AND B.Project_ID = '10' GROUP
>BY(U.User_Location);
>
>It returns all User Locations in the User table, regardless of the
>Project_ID of the Booking.

That's exactly how the LEFT JOIN is supposed to work. The users are on the
LEFT side of the LEFT JOIN so you will get _all_ rows from that table and
only those rows from the table on the *other* side of the join that satisfy
your ON conditions.

You can flip your query two ways. Change either LEFT JOIN to RIGHT JOIN or
exchange the positions of the table names around the JOIN keywords

SELECT COUNT(B.Booking_ID), U.User_Location
FROM Users U
RIGHT OUTER JOIN Bookings B
  ON U.User_ID = B.User_ID
  AND B.Project_ID = '10'
GROUP BY(U.User_Location);
-OR-
SELECT COUNT(B.Booking_ID), U.User_Location
FROM Bookings B
LEFT OUTER JOIN Users U
  ON U.User_ID = B.User_ID
  AND B.Project_ID = '10'
GROUP BY(U.User_Location);

Either one of these statements puts the Bookings table into the
"controlling" side of the partial join.


Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


   
 
From: "shaun thornburgh" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: 06/22/2004 09:41 AM
Subject: A Complicated Group Query
 




Hi,

I have three tables (among others!) in my database; Bookings, Users and
Projects. Users have location codes and can make bookings which are for a
particular project.

Using the following query I can get a list of all Locations (Each user has
a
Location code) and the number of Bookings that have taken place for that
location.

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
Bookings B ON U.User_ID = B.User_ID GROUP BY(U.User_Location);

This is useful as it shows zero values where no bookings have taken place.
However, Each booking is related to a particular project by Project_ID. How

can I modify this query such that it only returns a count of bookings where

the Project_ID = 'x'? If I modify this query to:

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
Bookings B ON U.User_ID = B.User_ID AND B.Project_ID = '10' GROUP
BY(U.User_Location);

It returns all User Locations in the User table, regardless of the
Project_ID of the Booking. And if I modify the query to:

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN
Bookings B ON U.User_ID = B.User_ID WHERE B.Project_ID = '10' GROUP
BY(U.User_Location);

I lose the zero/NULL Values...

Any comments here would be greatly appreciated!

_
It's fast, it's easy and it's free. Get MSN Messenger today!
http://www.msn.co.uk/messenger


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



R: connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
I have also weird results:
I access data by ID, which is a primary key in the table.
Some times (again, very random behaviour) I get 4 rows back
(instead of just 1) and the data I get is always

1275749989

This happens with random IDs.

the code to check this is:
[..]
 ResultSet rs = ps1.executeQuery();
 boolean c = false;
 while (rs.next()) {
  int g = rs.getInt(2);
  if (c) {
   // this shouldn't happen, because I'm selecting by ID, but it happens!
   System.out.println(m + " " + g);
  }
  c = true;
 }

> -Original Message-
> From: Leonardo Francalanci [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, June 22, 2004 2:43 PM
> To: Mysql
> Subject: connectorJ & huge table problem
>
>
> I have a large table (64,000,000 rows).
> Everything was fine when the table was 16,000,000 rows,
> now connectorJ crashes:
>
> java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
>   at
> com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPrepa
> redStateme
> nt.java:908)
>   at
> com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)
>
>
>
> my code is:
>
> java.sql.Connection regularConn =
> ConnectionHandler.getRegularConnection();
>
> java.sql.PreparedStatement ps1 = null;
> //letspartyConn.prepareStatement(query);
> for (int i = 0; i < 1000; i++) {
>  String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
>  ps1 = regularConn.prepareStatement(query);
>  int m = new Double(Math.random() * 6400).intValue();
>  ps1.setInt(1, 12352597);
>  ResultSet rs = ps1.executeQuery();
>  while (rs.next()) {
>   int g = rs.getInt(2);
>  }
>  rs.close();
>  ps1.close();
>
> }
> regularConn.close();
>
>
>
>
> I debugged the code, and it seems to happen when it reads a long
> (readLongLong),
> but I don't have any data in my db that is long (everything is int, and in
> fact
> every time the method readLongLong gets called it crashes).
> The error seems very random (sometimes happens, some others no, and always
> with different IDs),
> I have no idea.
>
> Note that using the prepared statement in the proper way, that is:
>
> String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
> ps1 = regularConn.prepareStatement(query);
> for (int i = 0; i < 1000; i++) {
>  int m = new Double(Math.random() * 6400).intValue();
>  ps1.setInt(1, 12352597);
>  ResultSet rs = ps1.executeQuery();
>  while (rs.next()) {
>   int g = rs.getInt(2);
>  }
>  rs.close();
> }
> ps1.close();
> regularConn.close();
>
>
> everything works fine. Re-creating the PreparedStatement each
> time gives me
> the error
> above when I call ResultSet rs = ps1.executeQuery().
>
> I'm using connectorJ 3.1.2, mysql 4.1.2
>
>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Changing Innodb Variables without restarting the server

2004-06-22 Thread Dave Dutcher
Hi,
 
I'm running MySQL 4.0.17-nt, and for some reason the my.cnf file in my
C:\ directory got renamed around the time we rebooted the server last.
Now the global variable innodb_buffer_pool_size is set to a small value,
and my queries are taking a long time to execute.  I don't want to
restart the service if I don't have too.  Does anybody know a way to
change the value of the innodb_buffer_pool_size variable at runtime?  Or
is this just not possible?
 
Thanks,
 
Dave


R: connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
> Are you using a streaming connection? 

ehm... how do I know?
My conn string is

jdbc:mysql://192.168.0.253:3000/LETSPARTY1?autoReconnect=yes

where 192.168.0.253 is a machine on my LAN...



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: connectorJ & huge table problem

2004-06-22 Thread Victor Pendleton
Are you using a streaming connection? 

-Original Message-
From: Leonardo Francalanci
To: Mysql
Sent: 6/22/04 7:43 AM
Subject: connectorJ & huge table problem

I have a large table (64,000,000 rows).
Everything was fine when the table was 16,000,000 rows,
now connectorJ crashes:

java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
at
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedSta
teme
nt.java:908)
at
com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:162
2)



my code is:

java.sql.Connection regularConn =
ConnectionHandler.getRegularConnection();

java.sql.PreparedStatement ps1 = null;
//letspartyConn.prepareStatement(query);
for (int i = 0; i < 1000; i++) {
 String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
 ps1 = regularConn.prepareStatement(query);
 int m = new Double(Math.random() * 6400).intValue();
 ps1.setInt(1, 12352597);
 ResultSet rs = ps1.executeQuery();
 while (rs.next()) {
  int g = rs.getInt(2);
 }
 rs.close();
 ps1.close();

}
regularConn.close();




I debugged the code, and it seems to happen when it reads a long
(readLongLong),
but I don't have any data in my db that is long (everything is int, and
in
fact
every time the method readLongLong gets called it crashes).
The error seems very random (sometimes happens, some others no, and
always
with different IDs),
I have no idea.

Note that using the prepared statement in the proper way, that is:

String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
ps1 = regularConn.prepareStatement(query);
for (int i = 0; i < 1000; i++) {
 int m = new Double(Math.random() * 6400).intValue();
 ps1.setInt(1, 12352597);
 ResultSet rs = ps1.executeQuery();
 while (rs.next()) {
  int g = rs.getInt(2);
 }
 rs.close();
}
ps1.close();
regularConn.close();


everything works fine. Re-creating the PreparedStatement each time gives
me
the error
above when I call ResultSet rs = ps1.executeQuery().

I'm using connectorJ 3.1.2, mysql 4.1.2





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



connectorJ & huge table problem

2004-06-22 Thread Leonardo Francalanci
I have a large table (64,000,000 rows).
Everything was fine when the table was 16,000,000 rows,
now connectorJ crashes:

java.sql.SQLException: java.lang.ArrayIndexOutOfBoundsException: 6
at
com.mysql.jdbc.ServerPreparedStatement.executeInternal(ServerPreparedStateme
nt.java:908)
at
com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1622)



my code is:

java.sql.Connection regularConn = ConnectionHandler.getRegularConnection();

java.sql.PreparedStatement ps1 = null;
//letspartyConn.prepareStatement(query);
for (int i = 0; i < 1000; i++) {
 String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
 ps1 = regularConn.prepareStatement(query);
 int m = new Double(Math.random() * 6400).intValue();
 ps1.setInt(1, 12352597);
 ResultSet rs = ps1.executeQuery();
 while (rs.next()) {
  int g = rs.getInt(2);
 }
 rs.close();
 ps1.close();

}
regularConn.close();




I debugged the code, and it seems to happen when it reads a long
(readLongLong),
but I don't have any data in my db that is long (everything is int, and in
fact
every time the method readLongLong gets called it crashes).
The error seems very random (sometimes happens, some others no, and always
with different IDs),
I have no idea.

Note that using the prepared statement in the proper way, that is:

String query = "SELECT * FROM PARTITIONED_B WHERE ID=?";
ps1 = regularConn.prepareStatement(query);
for (int i = 0; i < 1000; i++) {
 int m = new Double(Math.random() * 6400).intValue();
 ps1.setInt(1, 12352597);
 ResultSet rs = ps1.executeQuery();
 while (rs.next()) {
  int g = rs.getInt(2);
 }
 rs.close();
}
ps1.close();
regularConn.close();


everything works fine. Re-creating the PreparedStatement each time gives me
the error
above when I call ResultSet rs = ps1.executeQuery().

I'm using connectorJ 3.1.2, mysql 4.1.2





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



A Complicated Group Query

2004-06-22 Thread shaun thornburgh
Hi,
I have three tables (among others!) in my database; Bookings, Users and 
Projects. Users have location codes and can make bookings which are for a 
particular project.

Using the following query I can get a list of all Locations (Each user has a 
Location code) and the number of Bookings that have taken place for that 
location.

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN 
Bookings B ON U.User_ID = B.User_ID GROUP BY(U.User_Location);

This is useful as it shows zero values where no bookings have taken place. 
However, Each booking is related to a particular project by Project_ID. How 
can I modify this query such that it only returns a count of bookings where 
the Project_ID = 'x'? If I modify this query to:

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN 
Bookings B ON U.User_ID = B.User_ID AND B.Project_ID = '10' GROUP 
BY(U.User_Location);

It returns all User Locations in the User table, regardless of the 
Project_ID of the Booking. And if I modify the query to:

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN 
Bookings B ON U.User_ID = B.User_ID WHERE B.Project_ID = '10' GROUP 
BY(U.User_Location);

I lose the zero/NULL Values...
Any comments here would be greatly appreciated!
_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Issuing Command Line iInstructions from Web Application

2004-06-22 Thread Chris Gilbert
Is it possible to use mysqldump from within a web application on a
Windows-based server using Perl or ASP, or do I have to be logged in to
the actual command-line MySQL client with root privileges?

Thanks in advance,
Chris
_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


R: R: R: why CPU is high while disks are idle in a table scan???

2004-06-22 Thread Leonardo Francalanci
Don't worry, I found it: now my table has 64,000,000 rows...
there is no way it can be cached in memory...
Now the problem I have is that Connector/J crashes
(it hadn't crashed before, when the table had 16,000,000 rows)

I'm going to write another email...

thank you anyway

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, June 22, 2004 3:22 PM
> To: Leonardo Francalanci
> Cc: Mysql
> Subject: Re: R: R: why CPU is high while disks are idle in a table
> scan???
>
>
>
> Well, this is a very unique question. How can we force MySQL to IGNORE an
> index.  Is this so you can test your disk systems data throughput? I am
> quite interested in why you need to see the disk activity in order to
> understand if it's working or not.
>
> I have concentrated so hard and so long on making sure that it DOES use
> indices whenever they are available I am not sure how to answer that
> question. I know there is a setting that controls the size of the "key
> buffer" perhaps if you tune it to be really small it will force MySQL to
> read the indices straight off of the disk (or maybe not, I am not sure).
>
> Have you tried specifying a "wrong" index in your select statement (so as
> to force a table scan)  with the USE keyword?
>
>  Another solution could be to just get rid of the indices. Then
> ALL lookups
> would be table scans. You could replace each primary key with a unique
> constraint (not a unique index) to keep out the duplicates.
>
> Or search on a column that is not part of an index.
>
> Another solution could be to restart the server just before each test.
> Doing that will ensure that the cache is empty and that any data
> that needs
> to come from disk will do so (including indices). Merely flushing
> the cache
> while running will commit changes to disk but I don't believe that it will
> clear the cache.
>
> The excellent people on the MySQL internals discussion may be better
> equipped to answer this.
>
> Curiously yours,
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
>
>
>
>
>
>
> From: "Leonardo Francalanci" <[EMAIL PROTECTED]>
> To: "Mysql" <[EMAIL PROTECTED]>
> Date: 06/22/2004 03:55 AM
> Subject: R: R: why CPU is high while disks are idle in a table scan???
>
> First: thank you for sparing so much time with me.
>
> > There are 3 major factors determining your performance: 1) the
> > speed of the
> > CPU, 2) the size if your RAM, and 3) the data transfer speed of
> > your disks.
>
> I thought that the size of tables in the db would have made a difference.
> From the analysis you did and from my tests it doesn't look so...
>
> > 5) If your application is smart enough, you can split your data over
> > several servers. This would be a SERIOUS performance increase.
>
> In fact I'm writing a jdbc driver in java that I hope will be included
> in c-jdbc (http://c-jdbc.objectweb.org/), which is a java database
> cluster middleware (check it out, it seems very interesting)
>
> > 2) you can move parts of your data to other disks. Using multiple I/O
> > channels should be faster than a single channel.
>
> That is what I am trying right now, but I am having problems because
> it seems that there is no disk I/O if I access by index on my tables,
> even if I access most of the table data (while with a table scan
> disks work as expected).
>
> Does it happen because I have 1Gb of Ram? If yes: I know that this
> sounds like a silly question but... How can I "fill" the memory of my
> machine so it doesn't cache data? (I can't just remove memory, I don't
> have physical access to the machine)
>
>
> Thank you
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>
>
>
>
>
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: load data into 2 tables and set id

2004-06-22 Thread SGreen

Great catch! I believe you found the problem. Sorry about that!   ;-D

So... just curious... from the original 60GB of text data, how much space
did the final data require (including indices)?

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   

From: "J S" <[EMAIL PROTECTED]com>
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Date: 06/22/2004 07:55 AM
Subject: Re: load data into 2 tables and set id

   

   





I think I fixed it!

INSERT IGNORE url_visit (url_server_ID, url_path_ID, querystring,
category)
SELECT DISTINCT us.ID, up.ID, if(bt.path_split >0, SUBSTRING(bt.url,path),
NULL),
bt.category
FROM bulk_table bt
INNER JOIN url_servers us
   ON us.server = bt.server
INNER JOIN url_paths up
   on up.path=bt.path

After this I ran the sql to insert into internet_usage and this time it
finished after 10m32.

I'll do some more testing and let you know how it goes.

Thanks,

js.
>
>Did you mean there to be duplicates in the url_visits? Do I need to use
>IGNORE in the following SQL?
>
>INSERT url_visit (url_server_ID, url_path_ID, querystring,
>category)
>SELECT us.ID, up.ID, if(bt.path_split >0, SUBSTRING(bt.url,path),
>NULL),
>bt.category
>FROM bulk_table bt
>INNER JOIN url_servers us
>   ON us.server = bt.server
>INNER JOIN url_paths up
>   on up.path=bt.path
>
>js.
>
>mysql> select * from url_visit where urlid=1631;
>+---+---+-++--+

>| urlid | url_server_ID | url_path_ID | query

>

> | category |
>+---+---+-++--+

>|  1631 | 21720 |  630695 |
>cid=mrkbaki&src=catcmrkdswi|p1|rshop/catcmrkseai|p1|rshop/catcmrkhori|p1|rshop/catcmrki|p1|rshop/catcripa|p1|rgift

>| Online Sales |
>+---+---+-++--+

>1 row in set (0.01 sec)
>
>mysql> select * from url_visit where urlid=1753;
>+---+---+-++--+

>| urlid | url_server_ID | url_path_ID | query

>

> | category |
>+---+---+-++--+

>|  1753 | 21720 |  630695 |
>cid=mrkbaki&src=catcmrkdswi|p1|rshop/catcmrkseai|p1|rshop/catcmrkhori|p1|rshop/catcmrki|p1|rshop/catcripa|p1|rgift

>| Online Sales |
>+---+---+-++--+

>1 row in set (0.00 sec)
>
>>
>>Shawn,
>>
>>Thanks for your reply below. I found it extremely useful. I have followed

>>your instructions and got good results up to the url_visits table.
>>
>>I have a perl script to parse the values out of the log. The log has
>>3,770,246 lines and is gzipped. I then applied your SQL statements with
>>the following times:
>>
>>1. Parse script: 10m31
>>2. Load infile: 2m39
>>3. Insert url_servers and url_paths: 5m38
>>4. Insert url_visits 9m50
>>
>>However when I ran the sql to insert the internet usage table, it just
ran
>>until all the log space was filled up:
>>
>>-rw-rw   1 mysqlmysql379694680 Jun 22 10:05 bulk_table.MYD

>><
>>-rw-rw   1 mysqlmysql   1024 Jun 22 10:05 bulk_table.MYI
>>-rw-rw   1 mysqlmysql   8760 Jun 22 09:59 bulk_table.frm
>>-rw-rw   1 mysqlmysql2114977792 Jun 22 11:11
>>internet_usage.MYD <
>>-rw-rw   1 mysqlmysql   1024 Jun 22 10:34 internet_usage.MYI
>>-rw-rw   1 mysqlmysql   8646 Jun 22 09:59 internet_usage.

Re: R: R: why CPU is high while disks are idle in a table scan???

2004-06-22 Thread SGreen

Well, this is an unusual question: how can we force MySQL to IGNORE an
index? Is this so you can test your disk system's data throughput? I am
quite interested in why you need to see the disk activity in order to
understand whether it's working or not.

I have concentrated so hard and so long on making sure that it DOES use
indices whenever they are available that I am not sure how to answer that
question. I know there is a setting that controls the size of the "key
buffer"; perhaps if you tune it to be really small it will force MySQL to
read the indices straight off the disk (or maybe not, I am not sure).

Have you tried specifying a "wrong" index in your select statement (so as
to force a table scan)  with the USE keyword?
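That suggestion can be sketched with MySQL's index hints (a sketch only; the table and index names here are invented for illustration, and both USE INDEX and IGNORE INDEX are part of the SELECT syntax in the 3.23/4.0 line):

```sql
-- Tell the optimizer to skip the index it would normally pick,
-- which should degrade the lookup into a full table scan:
SELECT * FROM big_table IGNORE INDEX (idx_col) WHERE col = 42;

-- Alternatively, name an index that cannot satisfy the WHERE clause:
SELECT * FROM big_table USE INDEX (idx_other) WHERE col = 42;
```

Either hint restricts which indexes the optimizer may consider; neither changes the result set, only the access path.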

 Another solution could be to just get rid of the indices. Then ALL lookups
would be table scans. You could replace each primary key with a unique
constraint (not a unique index) to keep out the duplicates.

Or search on a column that is not part of an index.

Another solution could be to restart the server just before each test.
Doing that will ensure that the cache is empty and that any data that needs
to come from disk will do so (including indices). Merely flushing the cache
while running will commit changes to disk but I don't believe that it will
clear the cache.

The excellent people on the MySQL internals discussion may be better
equipped to answer this.

Curiously yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




   

From: "Leonardo Francalanci" <[EMAIL PROTECTED]tel.ie>
To: "Mysql" <[EMAIL PROTECTED]>
Date: 06/22/2004 03:55 AM
Subject: R: R: why CPU is high while disks are idle in a table scan???

   

   





First: thank you for sparing so much time with me.

> There are 3 major factors determining your performance: 1) the speed of
> the CPU, 2) the size of your RAM, and 3) the data transfer speed of your
> disks.

I thought that the size of tables in the db would have made a difference.
From the analysis you did and from my tests it doesn't look so...

> 5) If your application is smart enough, you can split your data over
> several servers. This would be a SERIOUS performance increase.

In fact I'm writing a jdbc driver in java that I hope will be included
in c-jdbc (http://c-jdbc.objectweb.org/), which is a java database
cluster middleware (check it out, it seems very interesting)

> 2) you can move parts of your data to other disks. Using multiple I/O
> channels should be faster than a single channel.

That is what I am trying right now, but I am having problems because
it seems that there is no disk I/O if I access by index on my tables,
even if I access most of the table data (while with a table scan
disks work as expected).

Does it happen because I have 1Gb of Ram? If yes: I know that this
sounds like a silly question but... How can I "fill" the memory of my
machine so it doesn't cache data? (I can't just remove memory, I don't
have physical access to the machine)


Thank you












mysql action history

2004-06-22 Thread web tur
Dear Sirs,

I have a MySQL database on the web server. Is there any
way to see MySQL activity in a log? I want to see
what my visitors did in the SQL databases.

Any ideas?

Best Regards







Re: Mysql and RHAS

2004-06-22 Thread gerald_clark

Oscar Hernández Hernández wrote:
> Hello:
> I have just installed a Linux RHAS 3.0 with mysql-3.23.58-1,
> mysql-devel-3.23.58-1 and mysql-server-3.23.58-1.9.
>
> When mysql starts, there is just one mysql daemon listening.
>
> When I had this configuration running in Redhat 7.3, mysql started 4
> daemons.

No, 4 threads.

> Could someone explain this behavior to me? What must I do to start more
> mysql daemons?

Nothing. It is the ps command that is behaving differently.
The newer ps does not show separate threads.

> Thanks in advance
> Saludos

Óscar Hernández Hernández
Programador de sistemas
Tl: +34 928 45 29 38
FAX:+34 928 45 29 34
España
Islas Canarias
Las Palmas de Gran Canaria
35003
Edificio de Humanidades

 




Logging Admin Activity

2004-06-22 Thread JDelph
Is there a way to log all activity for an Admin account only?

Thank you

Re: how to randomly select just 1 record from the table?

2004-06-22 Thread Egor Egorov
"Binay" <[EMAIL PROTECTED]> wrote:
> Well, I have only 4 records in the table. When I use the below-mentioned
> query it always returns the same record, irrespective of time delay. Now
> what can I do?

What version of MySQL do you use?

> 
> Thanks
> Binay
> 
>> "Binay" <[EMAIL PROTECTED]> wrote on 21/06/2004 15:03:34:
>>
>>
>> > I need to randomly select just one record from the table.
>> > How can I do that? Please help me out.
>>
>> SELECT * FROM tbl_name ORDER BY RAND() LIMIT 1 ;
>>



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com
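If ORDER BY RAND() misbehaves on an old server, a common workaround (a sketch, not from this thread; the random offset must be computed in the application, because LIMIT does not accept expressions) is a two-step fetch:

```sql
-- 1) Count the rows:
SELECT COUNT(*) FROM tbl_name;

-- 2) Pick a random integer r in [0, count-1] in application code,
--    then fetch the single row at that offset:
SELECT * FROM tbl_name LIMIT 3, 1;   -- r = 3 is just an example value
```

This avoids the full sort that ORDER BY RAND() performs, at the cost of a second round trip.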







Re: MySQL InnoDB tables dump [CASCADE]

2004-06-22 Thread Egor Egorov
Milan Svrlo <[EMAIL PROTECTED]> wrote:
> Hi,
> 
> I'd like to ask:
> In MySQL 4.x, can I create a dump of InnoDB tables with the INSERT SQL
> statements cascaded along the foreign keys?
> And if yes, how?
> 

No.
Probably you need mysqldump and SET FOREIGN_KEY_CHECKS = 0 command:
http://dev.mysql.com/doc/mysql/en/InnoDB_foreign_key_constraints.html
http://dev.mysql.com/doc/mysql/en/mysqldump.html
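Concretely, the combination those two manual pages describe might look like this (a sketch; db_name and dump.sql are placeholder names):

```sql
-- From the shell: mysqldump db_name > dump.sql
-- Then, when reloading, disable foreign key checks around the import
-- so that the INSERT order of the dumped tables does not matter:
SET FOREIGN_KEY_CHECKS = 0;
SOURCE dump.sql;
SET FOREIGN_KEY_CHECKS = 1;
```

Re-enabling the checks afterwards matters: the setting is per-session, and leaving it off would let later statements violate the constraints silently.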



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com







Re: load data into 2 tables and set id

2004-06-22 Thread J S
I think I fixed it!
INSERT IGNORE url_visit (url_server_ID, url_path_ID, querystring,
category)
SELECT DISTINCT us.ID, up.ID, if(bt.path_split >0, SUBSTRING(bt.url,path),
NULL),
bt.category
FROM bulk_table bt
INNER JOIN url_servers us
  ON us.server = bt.server
INNER JOIN url_paths up
  on up.path=bt.path
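Note that INSERT IGNORE only drops duplicates if url_visit carries a unique index over the looked-up columns. A hypothetical definition (the real key was not shown in the thread, and a text column needs a prefix length) could be:

```sql
-- Without a unique key like this, IGNORE has nothing to "ignore"
-- and every duplicate visit would still be inserted:
ALTER TABLE url_visit
  ADD UNIQUE KEY uq_visit (url_server_ID, url_path_ID, querystring(255));
```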
After this I ran the sql to insert into internet_usage and this time it 
finished after 10m32.

I'll do some more testing and let you know how it goes.
Thanks,
js.
Did you mean there to be duplicates in the url_visits? Do I need to use 
IGNORE in the following SQL?

INSERT url_visit (url_server_ID, url_path_ID, querystring,
category)
SELECT us.ID, up.ID, if(bt.path_split >0, SUBSTRING(bt.url,path),
NULL),
bt.category
FROM bulk_table bt
INNER JOIN url_servers us
  ON us.server = bt.server
INNER JOIN url_paths up
  on up.path=bt.path
js.
mysql> select * from url_visit where urlid=1631;
+---+---+-++--+
| urlid | url_server_ID | url_path_ID | query   
   
| category |
+---+---+-++--+
|  1631 | 21720 |  630695 | 
cid=mrkbaki&src=catcmrkdswi|p1|rshop/catcmrkseai|p1|rshop/catcmrkhori|p1|rshop/catcmrki|p1|rshop/catcripa|p1|rgift 
| Online Sales |
+---+---+-++--+
1 row in set (0.01 sec)

mysql> select * from url_visit where urlid=1753;
+---+---+-++--+
| urlid | url_server_ID | url_path_ID | query   
   
| category |
+---+---+-++--+
|  1753 | 21720 |  630695 | 
cid=mrkbaki&src=catcmrkdswi|p1|rshop/catcmrkseai|p1|rshop/catcmrkhori|p1|rshop/catcmrki|p1|rshop/catcripa|p1|rgift 
| Online Sales |
+---+---+-++--+
1 row in set (0.00 sec)

Shawn,
Thanks for your reply below. I found it extremely useful. I have followed 
your instructions and got good results up to the url_visits table.

I have a perl script to parse the values out of the log. The log has 
3,770,246 lines and is gzipped. I then applied your SQL statements with 
the following times:

1. Parse script: 10m31
2. Load infile: 2m39
3. Insert url_servers and url_paths: 5m38
4. Insert url_visits 9m50
However when I ran the sql to insert the internet usage table, it just ran 
until all the log space was filled up:

-rw-rw   1 mysqlmysql379694680 Jun 22 10:05 bulk_table.MYD 
   <
-rw-rw   1 mysqlmysql   1024 Jun 22 10:05 bulk_table.MYI
-rw-rw   1 mysqlmysql   8760 Jun 22 09:59 bulk_table.frm
-rw-rw   1 mysqlmysql2114977792 Jun 22 11:11 
internet_usage.MYD <
-rw-rw   1 mysqlmysql   1024 Jun 22 10:34 internet_usage.MYI
-rw-rw   1 mysqlmysql   8646 Jun 22 09:59 internet_usage.frm
-rw-rw   1 mysqlmysql33437600 Jun 22 10:23 url_paths.MYD
-rw-rw   1 mysqlmysql27696128 Jun 22 10:23 url_paths.MYI
-rw-rw   1 mysqlmysql   8574 Jun 22 09:59 url_paths.frm
-rw-rw   1 mysqlmysql 646160 Jun 22 10:19 url_servers.MYD
-rw-rw   1 mysqlmysql 796672 Jun 22 10:19 url_servers.MYI
-rw-rw   1 mysqlmysql   8578 Jun 22 09:59 url_servers.frm
-rw-rw   1 mysqlmysql119076844 Jun 22 10:32 url_visit.MYD
-rw-rw   1 mysqlmysql73026560 Jun 22 10:33 url_visit.MYI
-rw-rw   1 mysqlmysql   8694 Jun 22 09:59 url_visit.frm

I'm sure something's not right because the internet Usage table is bigger 
than the bulk table. I changed your sql a bit (on the last line with the 
query). Could this be the problem?

INSERT internet_usage (uid,time,ip,urlid,size)
SELECT bt.user, bt.time, bt.ip, uv.urlid, bt.size
FROM bulk_table bt
INNER JOIN url_servers us
   ON us.server=bt.server
INNER JOIN url_paths up
   ON up.path=bt.path
INNER JOIN url_visit uv
   ON uv.url_server_ID=us.ID
   AND uv.url_path_ID=up.ID
   AND uv.query=bt.query;
Thanks again,
js.
 welcome to a basic overview of bulk importing and normalizing as
you go --

Mysql and RHAS

2004-06-22 Thread Oscar Hernández Hernández


Hello:

I have just installed a Linux RHAS 3.0 with mysql-3.23.58-1,
mysql-devel-3.23.58-1 and mysql-server-3.23.58-1.9. 

When mysql starts, there is just one mysql daemon listening. 

When I had this configuration running in Redhat 7.3, mysql started 4
daemons.

Could someone explain this behavior to me? What must I do to start more mysql
daemons?

Thanks in advance


Saludos 

Óscar Hernández Hernández
Programador de sistemas
Tl: +34 928 45 29 38
FAX:+34 928 45 29 34
España
Islas Canarias
Las Palmas de Gran Canaria
35003
Edificio de Humanidades






Re: load data into 2 tables and set id

2004-06-22 Thread J S
Did you mean there to be duplicates in the url_visits? Do I need to use 
IGNORE in the following SQL?

INSERT url_visit (url_server_ID, url_path_ID, querystring,
category)
SELECT us.ID, up.ID, if(bt.path_split >0, SUBSTRING(bt.url,path),
NULL),
bt.category
FROM bulk_table bt
INNER JOIN url_servers us
  ON us.server = bt.server
INNER JOIN url_paths up
  on up.path=bt.path
js.
mysql> select * from url_visit where urlid=1631;
+---+---+-++--+
| urlid | url_server_ID | url_path_ID | query

  | category |
+---+---+-++--+
|  1631 | 21720 |  630695 | 
cid=mrkbaki&src=catcmrkdswi|p1|rshop/catcmrkseai|p1|rshop/catcmrkhori|p1|rshop/catcmrki|p1|rshop/catcripa|p1|rgift 
| Online Sales |
+---+---+-++--+
1 row in set (0.01 sec)

mysql> select * from url_visit where urlid=1753;
+---+---+-++--+
| urlid | url_server_ID | url_path_ID | query

  | category |
+---+---+-++--+
|  1753 | 21720 |  630695 | 
cid=mrkbaki&src=catcmrkdswi|p1|rshop/catcmrkseai|p1|rshop/catcmrkhori|p1|rshop/catcmrki|p1|rshop/catcripa|p1|rgift 
| Online Sales |
+---+---+-++--+
1 row in set (0.00 sec)

Shawn,
Thanks for your reply below. I found it extremely useful. I have followed 
your instructions and got good results up to the url_visits table.

I have a perl script to parse the values out of the log. The log has 
3,770,246 lines and is gzipped. I then applied your SQL statements with the 
following times:

1. Parse script: 10m31
2. Load infile: 2m39
3. Insert url_servers and url_paths: 5m38
4. Insert url_visits 9m50
However when I ran the sql to insert the internet usage table, it just ran 
until all the log space was filled up:

-rw-rw   1 mysqlmysql379694680 Jun 22 10:05 bulk_table.MYD  
  <
-rw-rw   1 mysqlmysql   1024 Jun 22 10:05 bulk_table.MYI
-rw-rw   1 mysqlmysql   8760 Jun 22 09:59 bulk_table.frm
-rw-rw   1 mysqlmysql2114977792 Jun 22 11:11 internet_usage.MYD 
<
-rw-rw   1 mysqlmysql   1024 Jun 22 10:34 internet_usage.MYI
-rw-rw   1 mysqlmysql   8646 Jun 22 09:59 internet_usage.frm
-rw-rw   1 mysqlmysql33437600 Jun 22 10:23 url_paths.MYD
-rw-rw   1 mysqlmysql27696128 Jun 22 10:23 url_paths.MYI
-rw-rw   1 mysqlmysql   8574 Jun 22 09:59 url_paths.frm
-rw-rw   1 mysqlmysql 646160 Jun 22 10:19 url_servers.MYD
-rw-rw   1 mysqlmysql 796672 Jun 22 10:19 url_servers.MYI
-rw-rw   1 mysqlmysql   8578 Jun 22 09:59 url_servers.frm
-rw-rw   1 mysqlmysql119076844 Jun 22 10:32 url_visit.MYD
-rw-rw   1 mysqlmysql73026560 Jun 22 10:33 url_visit.MYI
-rw-rw   1 mysqlmysql   8694 Jun 22 09:59 url_visit.frm

I'm sure something's not right because the internet Usage table is bigger 
than the bulk table. I changed your sql a bit (on the last line with the 
query). Could this be the problem?

INSERT internet_usage (uid,time,ip,urlid,size)
SELECT bt.user, bt.time, bt.ip, uv.urlid, bt.size
FROM bulk_table bt
INNER JOIN url_servers us
   ON us.server=bt.server
INNER JOIN url_paths up
   ON up.path=bt.path
INNER JOIN url_visit uv
   ON uv.url_server_ID=us.ID
   AND uv.url_path_ID=up.ID
   AND uv.query=bt.query;
Thanks again,
js.
 welcome to a basic overview of bulk importing and normalizing as
you go 
[ author's note: if you are seeing this thread for the first time and
certain items seem to be
introduced out of context, please review all previous posts in this 
thread.
There has been
a lot of information already exchanged on this topic that I clipped
out of this response. Thanks! -- SG]

In an earlier post you said
>The data values I have for each record are:
>
>user_id   date_time   size   url   category
>for example:
>
>u752359   2004-04-02 12:33:04   3403   htt

Re: load data into 2 tables and set id

2004-06-22 Thread J S
Shawn,
Thanks for your reply below. I found it extremely useful. I have followed 
your instructions and got good results up to the url_visits table.

I have a perl script to parse the values out of the log. The log has 
3,770,246 lines and is gzipped. I then applied your SQL statements with the 
following times:

1. Parse script: 10m31
2. Load infile: 2m39
3. Insert url_servers and url_paths: 5m38
4. Insert url_visits 9m50
However when I ran the sql to insert the internet usage table, it just ran 
until all the log space was filled up:

-rw-rw   1 mysqlmysql379694680 Jun 22 10:05 bulk_table.MYD   
 <
-rw-rw   1 mysqlmysql   1024 Jun 22 10:05 bulk_table.MYI
-rw-rw   1 mysqlmysql   8760 Jun 22 09:59 bulk_table.frm
-rw-rw   1 mysqlmysql2114977792 Jun 22 11:11 internet_usage.MYD  
   <
-rw-rw   1 mysqlmysql   1024 Jun 22 10:34 internet_usage.MYI
-rw-rw   1 mysqlmysql   8646 Jun 22 09:59 internet_usage.frm
-rw-rw   1 mysqlmysql33437600 Jun 22 10:23 url_paths.MYD
-rw-rw   1 mysqlmysql27696128 Jun 22 10:23 url_paths.MYI
-rw-rw   1 mysqlmysql   8574 Jun 22 09:59 url_paths.frm
-rw-rw   1 mysqlmysql 646160 Jun 22 10:19 url_servers.MYD
-rw-rw   1 mysqlmysql 796672 Jun 22 10:19 url_servers.MYI
-rw-rw   1 mysqlmysql   8578 Jun 22 09:59 url_servers.frm
-rw-rw   1 mysqlmysql119076844 Jun 22 10:32 url_visit.MYD
-rw-rw   1 mysqlmysql73026560 Jun 22 10:33 url_visit.MYI
-rw-rw   1 mysqlmysql   8694 Jun 22 09:59 url_visit.frm

I'm sure something's not right because the internet Usage table is bigger 
than the bulk table. I changed your sql a bit (on the last line with the 
query). Could this be the problem?

INSERT internet_usage (uid,time,ip,urlid,size)
SELECT bt.user, bt.time, bt.ip, uv.urlid, bt.size
FROM bulk_table bt
INNER JOIN url_servers us
   ON us.server=bt.server
INNER JOIN url_paths up
   ON up.path=bt.path
INNER JOIN url_visit uv
   ON uv.url_server_ID=us.ID
   AND uv.url_path_ID=up.ID
   AND uv.query=bt.query;
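One way to test the suspicion that duplicated url_visit rows are inflating the join output (a diagnostic sketch, not from the thread; column names as in the join above):

```sql
-- Any group with copies > 1 multiplies the matching bulk_table rows
-- in the INSERT ... SELECT, which would explain the oversized table:
SELECT url_server_ID, url_path_ID, query, COUNT(*) AS copies
FROM url_visit
GROUP BY url_server_ID, url_path_ID, query
HAVING copies > 1
ORDER BY copies DESC
LIMIT 10;
```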
Thanks again,
js.
 welcome to a basic overview of bulk importing and normalizing as
you go 
[ author's note: if you are seeing this thread for the first time and
certain items seem to be
introduced out of context, please review all previous posts in this thread.
There has been
a lot of information already exchanged on this topic that I clipped
out of this response. Thanks! -- SG]
In an earlier post you said
>The data values I have for each record are:
>
>user_id   date_time   size   url   category
>for example:
>
>u752359   2004-04-02 12:33:04   3403   http://www.mysql.com/index.html
business
>
To me that implies that you have a standard text log where each field is
separated by a space and rows are delimited by a CRLF pair. To get that log
into MySQL you will need to do something _like_ (I say "like" as you will
most likely need to tweak it to accommodate your actual data)
CREATE TABLE bulk_table (
  user varchar(10)
  , eventdate date
  , eventtime time
  , size int
  , url text
  , category varchar(50)
);
LOAD DATA INFILE  INTO bulk_table FIELDS TERMINATED BY '
' LINES TERMINATED BY '\r\n';
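A filled-in version of that statement might look like the following (the file name is hypothetical, since the original omitted it; MySQL's syntax also wants the TABLE keyword, and the terminators should be adjusted to the actual log format):

```sql
LOAD DATA INFILE '/tmp/access.log'
INTO TABLE bulk_table
FIELDS TERMINATED BY ' '      -- space-separated fields, per the sample row
LINES TERMINATED BY '\r\n';   -- CRLF-delimited lines
```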
Now we can start merging the logs into your data tables.
To quote RFC 2396:
   The URI syntax does not require that the scheme-specific-part have
   any general structure or set of semantics which is common among all
   URI.  However, a subset of URI do share a common syntax for
   representing hierarchical relationships within the namespace.  This
   "generic URI" syntax consists of a sequence of four main components:
   <scheme>://<authority><path>?<query>
   each of which, except <scheme>, may be absent from a particular URI.
That translates into 4 logical pieces we can split a URL into:
1) the scheme -- HTTP, FTP, GOPHER, etc...
2) the authority -- that is the server (www.yahoo.com) and any port numbers
or login information
3) the path -- /somefolder/somefile.whatever
4) the query -- everything after the ?
Not part of the generic URI is that bit known as a fragment (as identified
by the #) it is ALWAYS at the end of the _entire URL_ (including the query)
when it's used. I have always lumped those and queries into the same field.
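As a rough illustration of those pieces (a sketch using SUBSTRING_INDEX on an example URL; real parsing needs the LOCATE-based handling shown below, because any component may be absent):

```sql
SELECT
  -- everything before '://' is the scheme:
  SUBSTRING_INDEX('http://www.mysql.com/a/b.html?id=1', '://', 1)  AS scheme,
  -- everything after '://', up to the first '/', is the authority:
  SUBSTRING_INDEX(SUBSTRING_INDEX('http://www.mysql.com/a/b.html?id=1',
                                  '://', -1), '/', 1)              AS authority,
  -- everything after the '?' is the query:
  SUBSTRING_INDEX('http://www.mysql.com/a/b.html?id=1', '?', -1)   AS query_part;
```

For this example the three columns would come back as 'http', 'www.mysql.com', and 'id=1'.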
So you could create tables for each of those parts and get VERY normalized
or you can partly normalize like this:
ALTER TABLE bulk_table add server_split int not null default 0
  , add path_split int not null default 0
  , add server varchar(255)
  , add path varchar(255);
UPDATE bulk_table
set server_split = LOCATE('/', URL , 8)-1;
UPDATE bulk_table
SET path_split = if(LOCATE('?', URL, server_split)> 0, LOCATE('?', URL,
server_split),  LOCATE('#', URL, server_split)-1);
UPDATE bulk_table
set server=LEFT(URL, server_split );
#those 4 new columns helped us to parse out the 3 major parts of the url
#I added them to the table so that we would not have to keep reca

Re: LIBMYSQL.dll Not found

2004-06-22 Thread Sumito_Oda
Hi,

Perhaps you should copy mysql/lib/opt/libmysql.dll
to the mysql/bin directory.

Refer to the following thread of the MySQL-Win mailing list. 
http://lists.mysql.com/win32/14799

-- 
Sumito_Oda mailto:[EMAIL PROTECTED]




MySQL InnoDB tables dump [CASCADE]

2004-06-22 Thread Milan Svrlo
Hi,
I'd like to ask:
In MySQL 4.x, can I create a dump of InnoDB tables with the INSERT SQL
statements cascaded along the foreign keys?
And if yes, how?

thank you very much
regards
Milan Svrlo


RE: New to Dates - Plain English Please

2004-06-22 Thread Peter Lovatt
hi

SELECT DATE_FORMAT(date_field ,"%M %D %Y")
 FROM table1
ORDER BY date_field

will order in true chronological order 
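The contrast David asked about can be sketched like this (same table1 and date_field names as above):

```sql
-- Sorting on the formatted string is alphabetical:
-- "August 15th 2004" would sort before "March 1st 2004".
SELECT DATE_FORMAT(date_field, '%M %D %Y') AS pretty
FROM table1 ORDER BY pretty;

-- Sorting on the underlying DATE column is chronological:
-- 2004-03-01 comes before 2004-08-15.
SELECT DATE_FORMAT(date_field, '%M %D %Y') AS pretty
FROM table1 ORDER BY date_field;
```

The point is that ORDER BY can reference the raw column even when the SELECT list only exposes the formatted string.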

Peter



> -Original Message-
> From: David Blomstrom [mailto:[EMAIL PROTECTED]
> Sent: 22 June 2004 10:17
> To: [EMAIL PROTECTED]
> Subject: RE: New to Dates - Plain English Please
> 
> 
> --- Peter Lovatt <[EMAIL PROTECTED]> wrote:
> > Hi
> > 
> > There are two aspects to dates, and your questions
> > include parts of both.
> > 
> > When MySql stores dates you have a choice of column
> > types
> > 
> > http://dev.mysql.com/doc/mysql/en/DATETIME.html
> > 
> > has the details
> > 
> > If you just need to store dates then the DATE type
> > will suit your needs. You
> > just specify it as DATE
> > 
> > When you insert dates you need to insert them in the
> > format YYYY-MM-DD, i.e.
> > 2004-12-31
> > 
> > When you retrieve them they default to the same
> > format.
> > 
> > If you wanted to retrieve them in a different format
> > that is when you use
> > DATE_FORMAT(date,format)
> > 
> > eg
> > 
> > INSERT  INTO table1
> > (
> > field1
> > , date_field
> > , field2
> > )
> > VALUES
> > (
> > "some data"
> > , "2004-02-12"
> > , "some more data"
> > );
> > 
> > 
> > then to retrieve the data
> > 
> > SELECT DATE_FORMAT(date_field ,"%M %D %Y")
> > FROM table1
> > 
> > would return "February 12th 2004" instead of
> > "2004-02-12"
> > 
> > Hope this clarifies it..
> > 
> > Come back to me if not
> 
> Can you also print out your dates as February 12th,
> 2004, but order them by 2004-12-08? In other words,
> can you order dates chronologically, even though March
> comes before August alphabetically?
> 
> Thanks.
> 
> 
> __
> Do You Yahoo!?
> Tired of spam?  Yahoo! Mail has the best spam protection around 
> http://mail.yahoo.com 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




RE: New to Dates - Plain English Please

2004-06-22 Thread David Blomstrom
--- Peter Lovatt <[EMAIL PROTECTED]> wrote:
> Hi
> 
> There are two aspects to dates, and your questions
> include parts of both.
> 
> When MySql stores dates you have a choice of column
> types
> 
> http://dev.mysql.com/doc/mysql/en/DATETIME.html
> 
> has the details
> 
> If you just need to store dates then the DATE type
> will suit your needs. You
> just specify it as DATE
> 
> When you insert dates you need to insert them in the
> format YYYY-MM-DD, i.e.
> 2004-12-31
> 
> When you retrieve them they default to the same
> format.
> 
> If you wanted to retrieve them in a different format
> that is when you use
> DATE_FORMAT(date,format)
> 
> eg
> 
> INSERT  INTO table1
> (
> field1
> , date_field
> , field2
> )
> VALUES
> (
> "some data"
> , "2004-02-12"
> , "some more data"
> );
> 
> 
> then to retrieve the data
> 
> SELECT DATE_FORMAT(date_field ,"%M %D %Y")
> FROM table1
> 
> would return "February 12th 2004" instead of
> "2004-02-12"
> 
> Hope this clarifies it..
> 
> Come back to me if not

Can you also print out your dates as February 12th,
2004, but order them by 2004-12-08? In other words,
can you order dates chronologically, even though March
comes before August alphabetically?

Thanks.






LIBMYSQL.dll Not found

2004-06-22 Thread Jon Thompson
Unable to run mysqladmin.exe
Error: This application has failed to start because LIBMYSQL.dll was not 
found. Re-installing the application may fix this problem.

Re-installing does not fix this problem.
Regards,
Yarrokon


Re: Group Query

2004-06-22 Thread shaun thornburgh
Hi,
Thanks for your reply Johan. This works; however, I actually only wanted the
rows where Project_ID = '10'.

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN 
Bookings B ON U.User_ID = B.Rep_ID AND B.Project_ID = '10' GROUP 
BY(U.User_Location);

This query is returning all of the users in the user table regardless of the 
Project_ID...

So what I need is a query that returns a count of bookings for all user 
locations where the Project_ID = 'X'

Thanks again for your help

From: [EMAIL PROTECTED] (Johan Hook)
To: shaun thornburgh <[EMAIL PROTECTED]>
CC: [EMAIL PROTECTED],  [EMAIL PROTECTED]
Subject: Re: Group Query
Date: Tue, 22 Jun 2004 10:56:12 +0200
Shaun,
when you add "WHERE B.Project_ID = '10'" you, in a way,
change your LEFT JOIN to an INNER JOIN.
You need to do it like:
LEFT OUTER JOIN  Bookings B ON U.User_ID = B.Rep_ID AND B.Project_ID = '8'
/Johan
shaun thornburgh wrote:
Thanks for your reply,
This works great, but when I add a constraint such as WHERE B.Project_ID = 
'10' I dont get the NULL values from the User table which I need:

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN 
Bookings B ON U.User_ID = B.Rep_ID WHERE B.Project_ID = '8' GROUP 
BY(U.User_Location);

Any ideas, do I need to do a double join, or do I need to join the 
projects table also?

Thanks for your help

From: "Paul McNeil" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: RE: Group Query
Date: Fri, 18 Jun 2004 08:20:10 -0400
You can use a Left Outer Join.  Left Outer Join will include all that
matches as well as that which doesn't.  The resulting NULL entries for 
the
count will evaluate to 0.

SELECT COUNT(B.Booking_ID), U.User_Location FROM
Users U
LEFT OUTER JOIN
Bookings B
ON
U.User_ID = B.User_ID GROUP BY(U.User_Location);
God Bless
Paul C. McNeil
Developer in Java, MS-SQL, MySQL, and web technologies.







GOD BLESS AMERICA!
To God Be The Glory!
-Original Message-
From: shaun thornburgh [mailto:[EMAIL PROTECTED]
Sent: Friday, June 18, 2004 7:54 AM
To: [EMAIL PROTECTED]
Subject: Group Query
Hi,
The following query shows all the bookings each user has made and groups
the number of bookings by the user's location code.

mysql> SELECT COUNT(B.Booking_ID), User_Location FROM Bookings B, Users U
WHERE U.User_ID = B.User_ID GROUP BY(U.User_Location);
At the moment, if no bookings are made by a particular user then their
location isn't included in the result. Is it possible to modify this query
so that if no bookings are made for a particular location then the location
is still included in the result, i.e.

+---------------------+---------------+
| COUNT(B.Booking_ID) | User_Location |
+---------------------+---------------+
|                   1 | 01            |
|                   8 | 02            |
|                   9 | 03            |
|                   1 | 04            |
|                   1 | 05            |
|                   2 | 06            |
|                   1 | 07            |
...
Many thanks for your help
_
It's fast, it's easy and it's free. Get MSN Messenger today!
http://www.msn.co.uk/messenger
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




--
Johan Höök, Pythagoras Engineering Group
- MailTo:[EMAIL PROTECTED]
- http://www.pythagoras.se
Pythagoras AB, Stormbyv. 2-4, SE-163 55 SPÅNGA, Sweden
Phone: +46 8 760 00 10 Fax: +46 8 761 22 77



R: R: why CPU is high while disks are idle in a table scan???

2004-06-22 Thread Leonardo Francalanci
First: thank you for sparing so much time with me.

> There are 3 major factors determining your performance: 1) the speed
> of the CPU, 2) the size of your RAM, and 3) the data transfer speed
> of your disks.

I thought that the size of the tables in the db would have made a difference.
From the analysis you did and from my tests it doesn't look so...

> 5) If your application is smart enough, you can split your data over
> several servers. This would be a SERIOUS performance increase.

In fact I'm writing a JDBC driver in Java that I hope will be included
in c-jdbc (http://c-jdbc.objectweb.org/), which is a Java database
cluster middleware (check it out, it seems very interesting).

> 2) you can move parts of your data to other disks. Using multiple I/O
> channels should be faster than a single channel.

That is what I am trying right now, but I am having problems because
it seems that there is no disk I/O if I access by index on my tables,
even if I access most of the table data (while with a table scan 
disks work as expected).

Does it happen because I have 1 GB of RAM? If yes: I know that this
sounds like a silly question but... how can I "fill" the memory of my
machine so it doesn't cache data? (I can't just remove memory; I don't
have physical access to the machine.)
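One thing I may try (with a hypothetical filler table whose data file is
larger than my 1 GB of RAM) is forcing a full scan of it between tests, so
the OS cache fills up with its pages instead of my test data:

```sql
-- Hypothetical names: big_filler_table's data file exceeds RAM, and
-- filler_col is a non-indexed column, so the WHERE clause forces a real
-- full table scan (a bare COUNT(*) on a MyISAM table is answered from
-- table metadata without touching the data file at all).
SELECT COUNT(*) FROM big_filler_table WHERE filler_col IS NOT NULL;
```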


Thank you


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Group Query

2004-06-22 Thread Johan Hook
Shaun,
when you add "WHERE B.Project_ID = '10'" you, in a way, change your
LEFT JOIN into an INNER JOIN: the Users rows with no matching booking
come back with B.Project_ID = NULL, NULL fails the WHERE test, and so
those rows are dropped. Move the condition into the ON clause instead:
LEFT OUTER JOIN Bookings B ON U.User_ID = B.Rep_ID AND B.Project_ID = '8'
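In full, keeping the table and column names from your query, that would
presumably read:

```sql
-- The Project_ID filter sits in the ON clause, so unmatched Users rows
-- survive the join; COUNT(B.Booking_ID) skips their NULLs, giving a
-- count of 0 for locations with no bookings on project 8.
SELECT COUNT(B.Booking_ID), U.User_Location
FROM Users U
LEFT OUTER JOIN Bookings B
    ON U.User_ID = B.Rep_ID
   AND B.Project_ID = '8'
GROUP BY U.User_Location;
```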
/Johan
shaun thornburgh wrote:
Thanks for your reply,
This works great, but when I add a constraint such as WHERE B.Project_ID
= '10' I don't get the NULL values from the User table which I need:

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN 
Bookings B ON U.User_ID = B.Rep_ID WHERE B.Project_ID = '8' GROUP 
BY(U.User_Location);

Any ideas, do I need to do a double join, or do I need to join the 
projects table also?

Thanks for your help

From: "Paul McNeil" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: RE: Group Query
Date: Fri, 18 Jun 2004 08:20:10 -0400
You can use a Left Outer Join.  Left Outer Join will include all that
matches as well as that which doesn't.  The resulting NULL entries for 
the count will evaluate to 0.

SELECT COUNT(B.Booking_ID), U.User_Location FROM
Users U
LEFT OUTER JOIN
Bookings B
ON
U.User_ID = B.User_ID GROUP BY(U.User_Location);
God Bless
Paul C. McNeil
Developer in Java, MS-SQL, MySQL, and web technologies.







GOD BLESS AMERICA!
To God Be The Glory!
-Original Message-
From: shaun thornburgh [mailto:[EMAIL PROTECTED]
Sent: Friday, June 18, 2004 7:54 AM
To: [EMAIL PROTECTED]
Subject: Group Query
Hi,
The following query produces a table that shows all bookings that each user
has made and groups the number of bookings by the user's location code.

mysql> SELECT COUNT(B.Booking_ID), User_Location FROM Bookings B, Users U
WHERE U.User_ID = B.User_ID GROUP BY(U.User_Location);
At the moment if no bookings are made by a particular user then their
location isn't included in the result. Is it possible to modify this query
so that if no bookings are made for a particular location then the location
is still included in the result, i.e.

+---------------------+---------------+
| COUNT(B.Booking_ID) | User_Location |
+---------------------+---------------+
|                   1 | 01            |
|                   8 | 02            |
|                   9 | 03            |
|                   1 | 04            |
|                   1 | 05            |
|                   2 | 06            |
|                   1 | 07            |
...
Many thanks for your help
_
It's fast, it's easy and it's free. Get MSN Messenger today!
http://www.msn.co.uk/messenger
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger



--
Johan Höök, Pythagoras Engineering Group
- MailTo:[EMAIL PROTECTED]
- http://www.pythagoras.se
Pythagoras AB, Stormbyv. 2-4, SE-163 55 SPÅNGA, Sweden
Phone: +46 8 760 00 10 Fax: +46 8 761 22 77

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: New to Dates - Plain English Please

2004-06-22 Thread Peter Lovatt
Hi

There are two aspects to dates, and your questions include parts of both.

When MySql stores dates you have a choice of column types

http://dev.mysql.com/doc/mysql/en/DATETIME.html

has the details

If you just need to store dates then the DATE type will suit your needs. You
just specify it as DATE

When you insert dates you need to insert them in the format YYYY-MM-DD, i.e.
2004-12-31

When you retrieve them they default to the same format.

If you wanted to retrieve them in a different format that is when you use
DATE_FORMAT(date,format)

eg

INSERT  INTO table1
(
field1
, date_field
, field2
)
VALUES
(
"some data"
, "2004-02-12"
, "some more data"
);


then to retrieve the data

SELECT DATE_FORMAT(date_field ,"%M %D %Y")
FROM table1

would return "February 12th 2004" instead of "2004-02-12"
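On the length question from the original post: a DATE column takes no
length or attribute; you declare it simply as DATE, and it always comes
back as 10 characters in the form YYYY-MM-DD. A sketch of the table used
above (the other column types are just guesses for illustration):

```sql
CREATE TABLE table1 (
    field1     VARCHAR(100),  -- guessed type, for illustration only
    field2     VARCHAR(100),  -- guessed type, for illustration only
    date_field DATE           -- no length/value needed; stored as YYYY-MM-DD
);
```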

Hope this clarifies it..

Come back to me if not

Peter









> -Original Message-
> From: David Blomstrom [mailto:[EMAIL PROTECTED]
> Sent: 22 June 2004 07:40
> To: [EMAIL PROTECTED]
> Subject: New to Dates - Plain English Please
>
>
> I  haven't worked with dates yet and wondered if
> someone could give me an overview in plain English.
>
> At the moment, I'm working on a table with a column of
> dates in this format:
>
> March 2, 2003
> July 7, 2004
>
> If I understand the Manual, the correct format for
> these dates in a MySQL table would be like this:
>
> 03-02-2003
> 07-07-2004
>
> Am I right?
>
> When I create a DATE field, do I need to designate a
> Length/Value, Attribute, etc.? If so, how long is
> 07-07-2004 - 8 characters, or ten?
>
> And is it also correct that you can manipulate dates
> in this format to display in other formats? For
> example, 03-02-2003 could be displayed as March 2,
> 2003?
>
> I THINK this is what the Manual is talking about
> here...
>
> DATE_FORMAT(date,format)
> Formats the date value according to the format
> string. The following specifiers may be used in the
> format string:
> Specifier  Description
> %a  Abbreviated weekday name (Sun..Sat)
> %b  Abbreviated month name (Jan..Dec)
> %c  Month, numeric (0..12)
> %D  Day of the month with English suffix (0th, 1st, 2nd, 3rd, ...)
> %d  Day of the month, numeric (00..31)
> %e  Day of the month, numeric (0..31)
>
> If so, this must be something you do with PHP, right?
>
> I guess the most important thing I need to know is
> what format to put my dates in. If 07-07-2004 is the
> standard format, and if I can print/display that in
> various ways, then I can figure the rest out when I
> get to it.
>
> Thanks.
>
>
>
> __
> Do you Yahoo!?
> New and Improved Yahoo! Mail - Send 10MB messages!
> http://promotions.yahoo.com/new_mail
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Group Query

2004-06-22 Thread shaun thornburgh
Thanks for your reply,
This works great, but when I add a constraint such as WHERE B.Project_ID =
'10' I don't get the NULL values from the User table which I need:

SELECT COUNT(B.Booking_ID), U.User_Location FROM Users U LEFT OUTER JOIN 
Bookings B ON U.User_ID = B.Rep_ID WHERE B.Project_ID = '8' GROUP 
BY(U.User_Location);

Any ideas, do I need to do a double join, or do I need to join the projects 
table also?

Thanks for your help

From: "Paul McNeil" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: RE: Group Query
Date: Fri, 18 Jun 2004 08:20:10 -0400
You can use a Left Outer Join.  Left Outer Join will include all that
matches as well as that which doesn't.  The resulting NULL entries for the
count will evaluate to 0.
SELECT COUNT(B.Booking_ID), U.User_Location FROM
Users U
LEFT OUTER JOIN
Bookings B
ON
U.User_ID = B.User_ID GROUP BY(U.User_Location);
God Bless
Paul C. McNeil
Developer in Java, MS-SQL, MySQL, and web technologies.







GOD BLESS AMERICA!
To God Be The Glory!
-Original Message-
From: shaun thornburgh [mailto:[EMAIL PROTECTED]
Sent: Friday, June 18, 2004 7:54 AM
To: [EMAIL PROTECTED]
Subject: Group Query
Hi,
The following query produces a table that shows all bookings that each user
has made and groups the number of bookings by the user's location code.
mysql> SELECT COUNT(B.Booking_ID), User_Location FROM Bookings B, Users U
WHERE U.User_ID = B.User_ID GROUP BY(U.User_Location);
At the moment if no bookings are made by a particular user then their
location isn't included in the result. Is it possible to modify this query
so that if no bookings are made for a particular location then the location
is still included in the result, i.e.

+---------------------+---------------+
| COUNT(B.Booking_ID) | User_Location |
+---------------------+---------------+
|                   1 | 01            |
|                   8 | 02            |
|                   9 | 03            |
|                   1 | 04            |
|                   1 | 05            |
|                   2 | 06            |
|                   1 | 07            |
...
Many thanks for your help
_
It's fast, it's easy and it's free. Get MSN Messenger today!
http://www.msn.co.uk/messenger
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

_
It's fast, it's easy and it's free. Get MSN Messenger today! 
http://www.msn.co.uk/messenger

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]