Returning All Rows That Repeat

2007-07-29 Thread John Kopanas
I want to be able to return all rows that have one or more other rows
with the same customer_number and job_number.

So for instance: if there are two jobs in my jobs table with a
customer_number = '0123' and job_number = '12' then I want both of
those jobs to return one right after another so I can compare their
other fields.  And I want to run a query once a day over the whole
table to see if there are any repeats.

Does anyone have a clue how to do this?

Thanks :-)

Your Friend,

John

-- 
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Returning All Rows That Repeat

2007-07-29 Thread John Kopanas
Does it make sense that my DB is crapping out on a table of 100,000
rows with the following query?

SELECT * FROM jobs GROUP BY customer_number, job_number HAVING
count(*) > 1 ORDER BY customer_number;

:-)
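
One thing that may help on a table that size: without an index covering
the two grouping columns, the GROUP BY has to build and sort a temporary
table over all 100,000 rows.  A composite index is worth trying (a sketch;
the index name is made up):

CREATE INDEX jobs_cust_job ON jobs (customer_number, job_number);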

On 7/29/07, John Trammell [EMAIL PROTECTED] wrote:
  From: John Kopanas [EMAIL PROTECTED]
  Subject: Returning All Rows That Repeat
 
  I want to be able to return all rows that have one or more other rows
  with the same customer_number and job_number.
 
  So for instance: if there are two jobs in my jobs table with a
  customer_number = '0123' and job_number = '12' then I want both of
  those jobs to return one right after another so I can compare their
  other fields.  And I want to run a query once a day over the whole
  table to see if there are any repeats.

 One way would be a self-join, e.g.:

 SELECT *
 FROM mytable t1, mytable t2                     -- same table twice
 WHERE t1.customer_number = t2.customer_number   -- with same cust. no.
 AND t1.job_number = t2.job_number               -- and same job no.
 AND t1.id <> t2.id;                             -- but the records are distinct
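
A related sketch that returns every member of each duplicate group, one
after another, rather than one row per group (it assumes the jobs table
and id primary key described in the thread): join back to a derived table
of the repeated (customer_number, job_number) pairs.

SELECT j.*
FROM jobs j
JOIN (SELECT customer_number, job_number
      FROM jobs
      GROUP BY customer_number, job_number
      HAVING COUNT(*) > 1) dup
  ON dup.customer_number = j.customer_number
 AND dup.job_number = j.job_number
ORDER BY j.customer_number, j.job_number;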




-- 
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Exporting Data To File and Can't Create/Write to File

2007-05-28 Thread John Kopanas

I am doing the following:

echo 'SELECT * FROM jobs WHERE id = 1 INTO OUTFILE
"/Users/jkopanas/Sites/test_this_out.csv";' | mysql -uroot
backlog_development

And I get the following error:
ERROR 1 (HY000) at line 1: Can't create/write to file
'/Users/jkopanas/Sites/test_this_out.csv' (Errcode: 13)

I found this on mysql:
http://dev.mysql.com/doc/refman/5.0/en/cannot-create.html

I checked to see if root had File_priv and he/she does.  I am
developing this on Mac OS X but I am deploying on a linux box.

Any input? :-)

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Exporting Data To File and Can't Create/Write to File

2007-05-28 Thread John Kopanas

The directory existed, but I did not have the right permissions on the
directory for the mysql user.  Thanks my friend. :-)
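
For anyone hitting the same Errcode 13: a quick way to separate a
filesystem-permission problem from a missing File_priv is to retry the
export against a directory the server can normally write to (a sketch;
/tmp is just an example of such a directory).  If this succeeds, the
permissions on the original target directory are the problem.

SELECT * FROM jobs WHERE id = 1
INTO OUTFILE '/tmp/test_this_out.csv';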

On 5/28/07, Baron Schwartz [EMAIL PROTECTED] wrote:

Hi,

John Kopanas wrote:
 I am doing the following:

 echo 'SELECT * FROM jobs WHERE id = 1 INTO OUTFILE
 "/Users/jkopanas/Sites/test_this_out.csv";' | mysql -uroot
 backlog_development

 And I get the following error:
 ERROR 1 (HY000) at line 1: Can't create/write to file
 '/Users/jkopanas/Sites/test_this_out.csv' (Errcode: 13)

 I found this on mysql:
 http://dev.mysql.com/doc/refman/5.0/en/cannot-create.html

 I checked to see if root had File_priv and he/she does.  I am
 developing this on Mac OS X but I am deploying on a linux box.

Does /Users/jkopanas/Sites/ exist?  Does the user MySQL runs as have
permission to write to it?

Baron




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Why Is Autonumber Not Working With Following Query

2007-04-12 Thread John Kopanas

I have the following query:

INSERT INTO summaries (entity_name, job_type, job_status)
(SELECT entity_name, 'GOV', 'Non-Workable' FROM tmpSummaryTable);

The only other column I have is id and it is primary key autonumber int.

Why would I get the following error?

ERROR 1062 (23000) at line 3: Duplicate entry '5' for key 1

Why is the query trying to enter a duplicate entry?

This makes no sense to me :-).

Thanks

John

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why Is Autonumber Not Working With Following Query

2007-04-12 Thread John Kopanas

Wow, I am a little tired... this is the query causing me the problem...

INSERT INTO jobs
(SELECT * FROM tmpJobs WHERE NOT EXISTS
(SELECT * FROM jobs WHERE jobs.customer_number =
tmpJobs.customer_number and jobs.job_number = tmpJobs.job_number));

Why is the id not creating distinct #s?
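
One likely cause, assuming tmpJobs carries its own id column: SELECT *
copies tmpJobs.id straight into jobs.id, so the auto_increment never
fires and any id already present in jobs collides.  A sketch of the
workaround is to name the columns and leave id out (only the two columns
mentioned in the thread are listed here; in practice every non-id column
would be listed):

INSERT INTO jobs (customer_number, job_number)
SELECT t.customer_number, t.job_number
FROM tmpJobs t
WHERE NOT EXISTS
  (SELECT * FROM jobs
   WHERE jobs.customer_number = t.customer_number
     AND jobs.job_number = t.job_number);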

On 4/12/07, John Kopanas [EMAIL PROTECTED] wrote:

I have the following query:

INSERT INTO summaries (entity_name, job_type, job_status)
(SELECT entity_name, 'GOV', 'Non-Workable' FROM tmpSummaryTable);

The only other column I have is id and it is primary key autonumber int.

Why would I get the following error?

ERROR 1062 (23000) at line 3: Duplicate entry '5' for key 1

Why is the query trying to enter a duplicate entry?

This makes no sense to me :-).

Thanks

John

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How can I do something like this in mySQL...

2007-04-08 Thread John Kopanas

Sweet... thanks guys... it is working well now :-).

On 4/8/07, Mogens Melander [EMAIL PROTECTED] wrote:

Well, maybe you want to read up on isnull() and case (..) in the manual.
It's in there, somewhere.

mysql> SELECT CASE 1 WHEN 1 THEN 'one'
    ->     WHEN 2 THEN 'two' ELSE 'more' END;
        -> 'one'
mysql> SELECT CASE WHEN 1>0 THEN 'true' ELSE 'false' END;
        -> 'true'
mysql> SELECT CASE BINARY 'B'
    ->     WHEN 'a' THEN 1 WHEN 'b' THEN 2 END;
        -> NULL

IF(expr1,expr2,expr3)

If expr1 is TRUE (expr1 <> 0 and expr1 <> NULL) then IF() returns expr2;
otherwise it returns expr3. IF() returns a numeric or string value,
depending on the context in which it is used.

mysql> SELECT IF(1>2,2,3);
        -> 3
mysql> SELECT IF(1<2,'yes','no');
        -> 'yes'
mysql> SELECT IF(STRCMP('test','test1'),'no','yes');
        -> 'no'

IFNULL(expr1,expr2)

If expr1 is not NULL, IFNULL() returns expr1; otherwise it returns expr2.
IFNULL() returns a numeric or string value, depending on the context in
which it is used.

mysql> SELECT IFNULL(1,0);
        -> 1
mysql> SELECT IFNULL(NULL,10);
        -> 10
mysql> SELECT IFNULL(1/0,10);
        -> 10
mysql> SELECT IFNULL(1/0,'yes');
        -> 'yes'


--
Later

Mogens Melander
+45 40 85 71 38
+66 870 133 224


On Sat, April 7, 2007 20:19, John Kopanas wrote:
 I have a query that looks something like this:

 SELECT (c_o_w_inst_rev - c_o_w_estcost)/c_o_w_inst_rev
 FROM tmpGovernmentSummaries

 The problem is that sometimes c_o_w_inst_rev is 0 and dividing by zero
 returns a NULL.

 If c_o_w_inst_rev == 0 how can I return 0 for the SELECT above instead
 of NULL?  Can I test in the SELECT if c_o_w_inst_rev is 0 and return 0
 and if not do the math?

 Insight would be greatly appreciated :-)


 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How can I do something like this in mySQL...

2007-04-07 Thread John Kopanas

I have a query that looks something like this:

SELECT (c_o_w_inst_rev - c_o_w_estcost)/c_o_w_inst_rev
FROM tmpGovernmentSummaries

The problem is that sometimes c_o_w_inst_rev is 0 and dividing by zero
returns a NULL.

If c_o_w_inst_rev == 0 how can I return 0 for the SELECT above instead
of NULL?  Can I test in the SELECT if c_o_w_inst_rev is 0 and return 0
and if not do the math?

Insight would be greatly appreciated :-)
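
For reference, a sketch of the IF()-based fix applied to the columns in
this question, plus an equivalent form built from IFNULL() and NULLIF()
(the margin alias is just illustrative):

SELECT IF(c_o_w_inst_rev = 0,
          0,
          (c_o_w_inst_rev - c_o_w_estcost) / c_o_w_inst_rev) AS margin
FROM tmpGovernmentSummaries;

SELECT IFNULL((c_o_w_inst_rev - c_o_w_estcost) / NULLIF(c_o_w_inst_rev, 0), 0) AS margin
FROM tmpGovernmentSummaries;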


--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Finding a Solution To A Distinct Problem of Mine

2007-03-13 Thread John Kopanas

Ahh... sweet... makes sense... thanks! :-)

On 3/12/07, Brent Baisley [EMAIL PROTECTED] wrote:

If you are looking for the latest created_at date, then you want to be grabbing 
the max value of that field.

SELECT DISTINCT from_user_id, max(created_at) FROM messages WHERE to_user_id
= 1 GROUP BY from_user_id;

In your original query I think you meant to select from, not to, since to will 
be 1;


- Original Message -
From: John Kopanas [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Sunday, March 11, 2007 12:59 PM
Subject: Re: Finding a Solution To A Distinct Problem of Mine


I think I got it:

 SELECT * FROM (SELECT * FROM messages ORDER BY created_at DESC) as
 messages WHERE to_user_id = 1 GROUP BY from_user_id;

 Is this the best way about it... or are their better ways you suggest?

 On 3/11/07, John Kopanas [EMAIL PROTECTED] wrote:
 I have the following table:

 messages:
   if:
   from_user_id:
   to_user_id:
   body:
   created_at:
   updated_at:

 I have to return all the rows that have a distinct from_user_id based
 on a to_user_id.  Not only that but I want to return the rows with the
 newest created_at.

 I thought this would work:
 SELECT DISTINCT to_user_id, created_at FROM messages WHERE to_user_id
 = 1 ORDER BY created_at;

 But the problem is that I only get distincts when I only have
 to_user_id in the SELECT clause.  Any suggestions.

 I need to return everything on the latest row that has a distinct
 from_user_id :-).

 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info



 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Finding a Solution To A Distinct Problem of Mine

2007-03-11 Thread John Kopanas

I have the following table:

messages:
  id:
 from_user_id:
 to_user_id:
 body:
 created_at:
 updated_at:

I have to return all the rows that have a distinct from_user_id based
on a to_user_id.  Not only that but I want to return the rows with the
newest created_at.

I thought this would work:
SELECT DISTINCT to_user_id, created_at FROM messages WHERE to_user_id
= 1 ORDER BY created_at;

But the problem is that I only get distinct rows when I only have
to_user_id in the SELECT clause.  Any suggestions?

I need to return everything on the latest row that has a distinct
from_user_id :-).
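
A sketch of the usual "latest row per group" pattern for this table
(column names taken from the thread): join the table to a derived table
holding the newest created_at per from_user_id.  If two messages from the
same sender share the same created_at, both rows come back.

SELECT m.*
FROM messages m
JOIN (SELECT from_user_id, MAX(created_at) AS latest
      FROM messages
      WHERE to_user_id = 1
      GROUP BY from_user_id) lm
  ON lm.from_user_id = m.from_user_id
 AND lm.latest = m.created_at
WHERE m.to_user_id = 1;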

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Finding a Solution To A Distinct Problem of Mine

2007-03-11 Thread John Kopanas

I think I got it:

SELECT * FROM (SELECT * FROM messages ORDER BY created_at DESC) as
messages WHERE to_user_id = 1 GROUP BY from_user_id;

Is this the best way to go about it... or are there better ways you would suggest?

On 3/11/07, John Kopanas [EMAIL PROTECTED] wrote:

I have the following table:

messages:
  if:
  from_user_id:
  to_user_id:
  body:
  created_at:
  updated_at:

I have to return all the rows that have a distinct from_user_id based
on a to_user_id.  Not only that but I want to return the rows with the
newest created_at.

I thought this would work:
SELECT DISTINCT to_user_id, created_at FROM messages WHERE to_user_id
= 1 ORDER BY created_at;

But the problem is that I only get distincts when I only have
to_user_id in the SELECT clause.  Any suggestions.

I need to return everything on the latest row that has a distinct
from_user_id :-).

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Finding a Solution To A Distinct Problem of Mine

2007-03-11 Thread John Kopanas

Ok, did not work... hmmm... any other suggestions?

On 3/11/07, John Kopanas [EMAIL PROTECTED] wrote:

I think I got it:

SELECT * FROM (SELECT * FROM messages ORDER BY created_at DESC) as
messages WHERE to_user_id = 1 GROUP BY from_user_id;

Is this the best way about it... or are their better ways you suggest?

On 3/11/07, John Kopanas [EMAIL PROTECTED] wrote:
 I have the following table:

 messages:
   if:
   from_user_id:
   to_user_id:
   body:
   created_at:
   updated_at:

 I have to return all the rows that have a distinct from_user_id based
 on a to_user_id.  Not only that but I want to return the rows with the
 newest created_at.

 I thought this would work:
 SELECT DISTINCT to_user_id, created_at FROM messages WHERE to_user_id
 = 1 ORDER BY created_at;

 But the problem is that I only get distincts when I only have
 to_user_id in the SELECT clause.  Any suggestions.

 I need to return everything on the latest row that has a distinct
 from_user_id :-).

 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info



--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Easiest Way To Replicate DB

2007-01-25 Thread John Kopanas

I am doing tests, so I want to easily take my DB and make a full copy of
it into a test DB every time I want to test something against the
non-production version of the DB.  What is the easiest way to do this?

So I have a DB called backlog and I want to copy its structure and
data into backlog_test with ease :-).  Any sexy suggestions? :-)
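
The usual route is to pipe a mysqldump of backlog straight into
backlog_test.  Staying in SQL, a per-table sketch (assuming MySQL 4.1 or
later for CREATE TABLE ... LIKE; the table name companies is only an
example, repeat the last two statements for each table) looks like this:

CREATE DATABASE IF NOT EXISTS backlog_test;
CREATE TABLE backlog_test.companies LIKE backlog.companies;
INSERT INTO backlog_test.companies SELECT * FROM backlog.companies;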

Your Friend,

John

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: RE: Having Problem With Load Files and AutoNumber id fields

2006-11-27 Thread John Kopanas

The empty field did not work but the 0 worked beautifully!  Thanks everyone! :-)

On 11/27/06, Jerry Schwartz [EMAIL PROTECTED] wrote:

Perhaps that empty field is being interpreted as a string?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


 -Original Message-
 From: John Kopanas [mailto:[EMAIL PROTECTED]
 Sent: Friday, November 24, 2006 11:13 AM
 To: mysql@lists.mysql.com
 Subject: Having Problem With Load Files and AutoNumber id fields

 Argh...  I am autogenerating a load file after parsing and massaging a
 text file I get from an AS/400 system.  My first field is an id that
 is on auto_number so I put a tab at the begging of the line for each
 one of my records... it throws a warning for each line... if instead
 of putting an autonumber I put a random number it works perfectly.
 How can I get it to work?

 Sample line in my load file that works:
 1 306600121001001467  SMART ALARMS
 INC   201
 MILLWAY AVE UNIT #8   \N  \N  VAUGHAN ON  L4K 5K8
 0 1999-9-1
 00:00:00  1999-9-1 00:00:00

 Sample line that does not work:
   306600121001001467  SMART ALARMS INC201
 MILLWAY AVE UNIT
 #8\N  \N  VAUGHAN ON  L4K 5K8 0
 1999-9-1 00:00:00 1999-9-1 00:00:00

 My code that generates the sample line that does not work:
 tempfile.puts
 \t#{record[0,22].strip}\t#{record[22,32].strip}\t#{record[54,
 32].strip}\t#{record[86,32].strip}\t#{record[118,32].strip}\t#
 {record[150,30].strip}\t#{record[180,2].strip}\t#{record[182,7
 ].strip}\t0\t2000-01-01
 00:00:00\t2000-01-01 00:00:00

 How can I get mysql to put the auto_numbers in for me?  Why is it even
 accepting my numbers?

 Thanks :-)

 Does anyone also know how I can see the warnings created by loadfile?
 All I just see is the summary with skipped and warnings.

 Thanks :-)
 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]









--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread John Kopanas

Thanks a lot for your help.

The query should and does return only 1-6 rows depending on the id.
Never more than that.  Here are the comparative EXPLAINs:

mysql> EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
+----+-------------+--------------------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table              | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+--------------------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | purchased_services | const | PRIMARY       | PRIMARY | 4       | const |    1 |       |
+----+-------------+--------------------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM purchased_services WHERE company_id = 1000;
+----+-------------+--------------------+------+-------------------------------------+------+---------+------+--------+-------------+
| id | select_type | table              | type | possible_keys                       | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------------------+------+-------------------------------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | purchased_services | ALL  | purchased_services_company_id_index | NULL | NULL    | NULL | 626188 | Using where |
+----+-------------+--------------------+------+-------------------------------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

Here is the explain for the SELECT COUNT(id)

mysql> EXPLAIN SELECT count(id) FROM companies;
+----+-------------+-----------+-------+---------------+---------+---------+------+--------+-------------+
| id | select_type | table     | type  | possible_keys | key     | key_len | ref  | rows   | Extra       |
+----+-------------+-----------+-------+---------------+---------+---------+------+--------+-------------+
|  1 | SIMPLE      | companies | index | NULL          | PRIMARY | 4       | NULL | 533821 | Using index |
+----+-------------+-----------+-------+---------------+---------+---------+------+--------+-------------+
1 row in set (0.10 sec)

The EXPLAIN takes a fraction of a second and returns the amount of rows
plus some.  But when I just ask for the count it took me 5 seconds.
Something is broken.

My innodb_buffer_pool_size is:
innodb_buffer_pool_size | 8388608

That looks like 8MB... that sounds small if I have a DB with over 1M
rows to process.  No?

Thanks again for your help.

Your Friend,

John


On 11/25/06, Dan Nelson [EMAIL PROTECTED] wrote:

In the last episode (Nov 25), John Kopanas said:
 Sorry about these questions.  I am used to working with DBs with less
 then 10K rows and now I am working with tables with over 500K rows
 which seems to be changing a lot for me.  I was hoping I can get some
 people's advice.

 I have a 'companies' table with over 500K rows and a
 'purchased_services' table with over 650K rows.

 The following query takes over 6 seconds:
 SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)

How many rows does it return, and what's its explain plan?

 purchased_services.company_id has an index on it.

 The following query takes over 3 seconds:
 SELECT count(id) FROM companies;

An explain plan here would be useful too.

 To me the time it takes to run these queries makes no sense.  I would
 imagine both of these queries would take a fraction of a second.

 When running some of these queries and looking at 'SHOW processlist'
 I was getting a lot of 'Writing to tmp'.

That doesn't make much sense, since both queries should simply be doing
index scans (or full table scans depending on how many rows are
expected to match in the first query, but the explain plans will show
that).

 My DB engine is InnoDB.  I am running this on my laptop that is a
 PowerBook 867 with 756 MB of Ram.

What's your innodb_buffer_pool_size set to?

--
Dan Nelson
[EMAIL PROTECTED]




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread John Kopanas

The application is not in production yet, but when it goes into
production the server will be considerably faster and have much more
RAM.  But before I put the app in production I want to make sure it is
working properly.  500K rows does not sound like that much in this day
and age.  If I understand what is going on on my laptop I will be
confident it will work well in production.

:-)

On 11/25/06, Dan Buettner [EMAIL PROTECTED] wrote:

This kind of timeframe (2 - 2.5 secs) could just be the result of
running on a laptop.  You've got a small amount of RAM compared to
many servers, a bit slower processor, and *much* slower hard disk
system than most servers.  If your query has to access multiple
records spread out throughout the table off a slow laptop disk, this
makes sense to me.

Do you normally run this database on a real server in production?
Is response time better?

Still, it does seem a bit slow, even for an 867 MHz laptop, and seeing
the output of an EXPLAIN might be helpful.

Also, to answer your question about the speed of selecting the count
of rows in a table - MyISAM always knows exactly how many rows are in
a table, so it answers quickly without checking the data.  InnoDB does
not keep track of how many rows are in a table, so it has to go count
them when you do a SELECT COUNT(*) or in your case SELECT COUNT(id).
That takes a little bit of time.

Dan


On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
 If I just SELECT id:
 SELECT id FROM purchased_services WHERE (company_id = 1000)

 It takes approx 2-2.5s.  When I look at the process list it looks like
 that it's state seems to always be in sending data...

 This is after killing the db and repopulating it again.  So what is going on?

 On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
  I tried the same tests with the database replicated in a MyISAM
  engine.  The count was instantaneous but the following still took
  3-6seconds:
 
  SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)
 
  The following though was instantaneous:
 
  SELECT * FROM purchased_services WHERE (id = 1000)
 
  This is the result from my SHOW INDEX FROM purchased_services:
 
  
+++-+--+-+---+-+--++--++-+
  | Table  | Non_unique | Key_name
   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part |
  Packed | Null | Index_type | Comment |
  
+++-+--+-+---+-+--++--++-+
  | purchased_services |  0 | PRIMARY
   |1 | id  | A |  627546 | NULL |
  NULL   |  | BTREE  | |
  | purchased_services |  1 |
  purchased_services_company_id_index |1 | company_id  | A
|  627546 | NULL | NULL   | YES  | BTREE  |
  |
  
+++-+--+-+---+-+--++--++-+
 
  So I do have the necessary index.  I am so confused  Argh...
 
  Your Friend,
 
  John
 
 
  On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
   Sorry about these questions.  I am used to working with DBs with less
   then 10K rows and now I am working with tables with over 500K rows
   which seems to be changing a lot for me.  I was hoping I can get some
   people's advice.
  
   I have a 'companies' table with over 500K rows and a
   'purchased_services' table with over 650K rows.
  
   The following query takes over 6 seconds:
   SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)
  
   purchased_services.company_id has an index on it.
  
   The following query takes over 3 seconds:
   SELECT count(id) FROM companies;
  
   To me the time it takes to run these queries makes no sense.  I would
   imagine both of these queries would take a fraction of a second.
  
   When running some of these queries and looking at 'SHOW processlist' I
   was getting a lot of 'Writing to tmp'.
  
   My DB engine is InnoDB.  I am running this on my laptop that is a
   PowerBook 867 with 756 MB of Ram.
  
   Feedback and insight would be greatly appreciated.
  
   Thanks my friends! :-)
  
   --
   John Kopanas
   [EMAIL PROTECTED]
  
   http://www.kopanas.com
   http://www.cusec.net
   http://www.soen.info
  
 
 
  --
  John Kopanas
  [EMAIL PROTECTED]
 
  http://www.kopanas.com
  http://www.cusec.net
  http://www.soen.info
 


 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






--
John Kopanas
[EMAIL PROTECTED]

http

Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread John Kopanas

When I did a:

SELECT * FROM purchased_services WHERE company_id = 1000;

It took me 7 seconds.  This is driving me crazy!

I am going to have to try this on another computer and see if I am
going to get the same results on another system.   Argh...

On 11/26/06, Dan Nelson [EMAIL PROTECTED] wrote:

In the last episode (Nov 26), John Kopanas said:
 Thanks a lot for your help.

 The query should and only does return 1-6 rows depending on the id.
 Never more then that.  Here are the comperative EXPLAINs:

 mysql EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
 
++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key   | 
key_len | ref   | rows | Extra |
 
++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | purchased_services | const | PRIMARY   | PRIMARY |   
4 | const |1 |   |
 
++-++---+---+-+-+---+--+---+
 1 row in set (0.00 sec)

This query definitely should run almost instantly, since it looks like a
direct lookup on the primary key.

 mysql EXPLAIN SELECT * FROM purchased_services WHERE company_id = 1000;
 
++-++--+-+--+-+--++-+
 | id | select_type | table  | type | possible_keys  | 
key  | key_len | ref  | rows   | Extra   |
 
++-++--+-+--+-+--++-+
 |  1 | SIMPLE  | purchased_services | ALL  | 
purchased_services_company_id_index | NULL |NULL | NULL | 626188 | Using where 
|
 
++-++--+-+--+-+--++-+
 1 row in set (0.00 sec)

This EXPLAIN indicates that mysql thinks that the query would match
~600k rows and will do a full table scan.  Mysql only keeps a single
cardinality value for each index that estimates how many records have
a unique value in the index.  This can cause problems for the optimizer
if you have one value for say 60% of the table, and unique values for
the rest.  You can try adding a FORCE INDEX clause to the query and see
if that helps.

 Here is the explain for the SELECT COUNT(id)

 mysql EXPLAIN SELECT count(id) FROM companies;
 
++-+---+---+---+-+-+--++-+
 | id | select_type | table | type  | possible_keys | key | key_len | 
ref  | rows   | Extra   |
 
++-+---+---+---+-+-+--++-+
 |  1 | SIMPLE  | companies | index | NULL  | PRIMARY |  4 | NULL 
| 533821 | Using index |
 
++-+---+---+---+-+-+--++-+
 1 row in set (0.10 sec)

 The explain takes a fraction of a second and returns the amound of row
 plus some.  But when I just as for the count it took me 5 seconds.
 Something is broken.

Note that a primary index scan on an InnoDB table really is a full
table scan.  Try creating another index on just the id field and force
mysql to use it with a FORCE INDEX clause.  Innodb's query optimizer
will always prefer the primary index even if the secondary is smaller,
which is why you have to force it here.  The row count in the estimate
is off because Innodb's query optimizer doesn't know the exact row
count and has to guess.

 My innodb_buffer_pool_size is:
 innodb_buffer_pool_size | 8388608

 That looks like 8MB... that sounds small if I have a DB with over 1M
 rows to process.  No?

Yes, that's extremely small.  I'd go for at least 256M, and maybe 512M
if your machine will primarily be doing mysql duties.

--
Dan Nelson
[EMAIL PROTECTED]




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread John Kopanas

Yes... with FORCE INDEX it still takes 7 seconds.

On 11/26/06, Dan Nelson [EMAIL PROTECTED] wrote:

In the last episode (Nov 26), John Kopanas said:
 On 11/26/06, Dan Nelson [EMAIL PROTECTED] wrote:
 In the last episode (Nov 26), John Kopanas said:
  Thanks a lot for your help.
 
  The query should and only does return 1-6 rows depending on the id.
  Never more then that.  Here are the comperative EXPLAINs:
 
  mysql EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
 
 
++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key   | 
key_len | ref   | rows | Extra |
 
++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | purchased_services | const | PRIMARY   | PRIMARY | 
  4 | const |1 |   |
 
++-++---+---+-+-+---+--+---+
  1 row in set (0.00 sec)
 
 This query definitly should run almost instantly, since it looks like a
 direct lookup on the primary key.
 
  mysql EXPLAIN SELECT * FROM purchased_services WHERE company_id = 1000;
 
 
++-++--+-+--+-+--++-+
 | id | select_type | table  | type | possible_keys   | 
key  | key_len | ref  | rows   | Extra   |
 
++-++--+-+--+-+--++-+
 |  1 | SIMPLE  | purchased_services | ALL  | 
purchased_services_company_id_index | NULL |NULL | NULL | 626188 | Using where |
 
++-++--+-+--+-+--++-+
  1 row in set (0.00 sec)
 
 This EXPLAIN indicates that mysql thinks that the query would match
 ~600k rows and will do a full table scan.  Mysql only keeps a single
 cardinality value for each index that estimates how many records have
 a unique value in the index.  This can cause problems for the optimizer
 if you have one value for say 60% of the table, and unique values for
 the rest.  You can try adding a FORCE INDEX clause to the query and see
 if that helps.

 When I did a:

 SELECT * FROM purchased_services WHERE company_id = 1000;

 It took me 7 seconds.  This is driving me crazy!

 I am going to have to try this on another computer and see if I am
 going to get the same results on another system.  Argh...

Yes, a full table scan on 500k rows may very well take 7 seconds.  Did
you try with a FORCE INDEX (purchased_services_company_id) clause?

--
Dan Nelson
[EMAIL PROTECTED]




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Table of type=memory is full... but not

2006-11-25 Thread John Kopanas

I create a memory table with the following query:

 CREATE TABLE company_totals type=memory
 SELECT company_id id, SUM(annual_service_charge) service_charge
 FROM purchased_services ps
 GROUP BY company_id;

When I try this I get the following error:

Mysql::Error: The table 'company_totals' is full:

So I decided to see what is in the table:

SELECT * FROM company_totals;

And I get the following error:

ERROR 1146 (42S02): Table 'opportunity_development.company_totals' doesn't exist

Even when I do a: SHOW TABLE STATUS the table is not listed.

So, where is this table?  Why does mySQL think it is full but does not
think it exists at the same time?  Help :-)

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Table of type=memory is full... but not

2006-11-25 Thread John Kopanas

When I moved from type=memory to the default engine it worked fine.  I am
starting to think that the quantity of rows (approx. 550K) was too
much for the memory on my computer to handle.  Does this make sense?

On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:

I create a memory table with the following query:

  CREATE TABLE company_totals type=memory
  SELECT company_id id, SUM(annual_service_charge) service_charge
  FROM purchased_services ps
  GROUP BY company_id;

When I try this I get the following error:

Mysql::Error: The table 'company_totals' is full:

So I decided to see what is in the table:

SELECT * FROM company_totals;

And I get the following error:

ERROR 1146 (42S02): Table 'opportunity_development.company_totals' doesn't exist

Even when I do a: SHOW TABLE STATUS the table is not listed.

So, where is this table?  Why does mySQL think it is full but does not
think it exists at the same time?  Help :-)

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas

Sorry about these questions.  I am used to working with DBs with less
than 10K rows and now I am working with tables with over 500K rows
which seems to be changing a lot for me.  I was hoping I can get some
people's advice.

I have a 'companies' table with over 500K rows and a
'purchased_services' table with over 650K rows.

The following query takes over 6 seconds:
SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)

purchased_services.company_id has an index on it.

The following query takes over 3 seconds:
SELECT count(id) FROM companies;

To me the time it takes to run these queries makes no sense.  I would
imagine both of these queries would take a fraction of a second.

When running some of these queries and looking at 'SHOW processlist' I
was getting a lot of 'Writing to tmp'.

My DB engine is InnoDB.  I am running this on my laptop that is a
PowerBook 867 with 756 MB of Ram.

Feedback and insight would be greatly appreciated.

Thanks my friends! :-)

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas

I tried the same tests with the database replicated in a MyISAM
engine.  The count was instantaneous but the following still took
3-6 seconds:

SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)

The following though was instantaneous:

SELECT * FROM purchased_services WHERE (id = 1000)

This is the result from my SHOW INDEX FROM purchased_services:

+--------------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table              | Non_unique | Key_name                            | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+--------------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| purchased_services |          0 | PRIMARY                             |            1 | id          | A         |      627546 | NULL     | NULL   |      | BTREE      |         |
| purchased_services |          1 | purchased_services_company_id_index |            1 | company_id  | A         |      627546 | NULL     | NULL   | YES  | BTREE      |         |
+--------------------+------------+-------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

So I do have the necessary index.  I am so confused.  Argh...

Your Friend,

John


On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:

Sorry about these questions.  I am used to working with DBs with less
then 10K rows and now I am working with tables with over 500K rows
which seems to be changing a lot for me.  I was hoping I can get some
people's advice.

I have a 'companies' table with over 500K rows and a
'purchased_services' table with over 650K rows.

The following query takes over 6 seconds:
SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)

purchased_services.company_id has an index on it.

The following query takes over 3 seconds:
SELECT count(id) FROM companies;

To me the time it takes to run these queries makes no sense.  I would
imagine both of these queries would take a fraction of a second.

When running some of these queries and looking at 'SHOW processlist' I
was getting a lot of 'Writing to tmp'.

My DB engine is InnoDB.  I am running this on my laptop that is a
PowerBook 867 with 756 MB of Ram.

Feedback and insight would be greatly appreciated.

Thanks my friends! :-)

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas

If I just SELECT id:
SELECT id FROM purchased_services WHERE (company_id = 1000)

It takes approx 2-2.5s.  When I look at the process list, its state
always seems to be 'Sending data'...

This is after killing the db and repopulating it again.  So what is going on?

On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:

I tried the same tests with the database replicated in a MyISAM
engine.  The count was instantaneous but the following still took
3-6seconds:

SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)

The following though was instantaneous:

SELECT * FROM purchased_services WHERE (id = 1000)

This is the result from my SHOW INDEX FROM purchased_services:

+++-+--+-+---+-+--++--++-+
| Table  | Non_unique | Key_name
 | Seq_in_index | Column_name | Collation | Cardinality | Sub_part |
Packed | Null | Index_type | Comment |
+++-+--+-+---+-+--++--++-+
| purchased_services |  0 | PRIMARY
 |1 | id  | A |  627546 | NULL |
NULL   |  | BTREE  | |
| purchased_services |  1 |
purchased_services_company_id_index |1 | company_id  | A
  |  627546 | NULL | NULL   | YES  | BTREE  |
|
+++-+--+-+---+-+--++--++-+

So I do have the necessary index.  I am so confused  Argh...

Your Friend,

John


On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
 Sorry about these questions.  I am used to working with DBs with less
 then 10K rows and now I am working with tables with over 500K rows
 which seems to be changing a lot for me.  I was hoping I can get some
 people's advice.

 I have a 'companies' table with over 500K rows and a
 'purchased_services' table with over 650K rows.

 The following query takes over 6 seconds:
 SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)

 purchased_services.company_id has an index on it.

 The following query takes over 3 seconds:
 SELECT count(id) FROM companies;

 To me the time it takes to run these queries makes no sense.  I would
 imagine both of these queries would take a fraction of a second.

 When running some of these queries and looking at 'SHOW processlist' I
 was getting a lot of 'Writing to tmp'.

 My DB engine is InnoDB.  I am running this on my laptop that is a
 PowerBook 867 with 756 MB of Ram.

 Feedback and insight would be greatly appreciated.

 Thanks my friends! :-)

 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info



--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Having Problem With Load Files and AutoNumber id fields

2006-11-24 Thread John Kopanas

Argh...  I am autogenerating a load file after parsing and massaging a
text file I get from an AS/400 system.  My first field is an id that
is on auto_number so I put a tab at the beginning of the line for each
one of my records... it throws a warning for each line... if instead
of putting an autonumber I put a random number it works perfectly.
How can I get it to work?

Sample line in my load file that works:
1   306600121001001467  SMART ALARMS INC201
MILLWAY AVE UNIT #8 \N  \N  VAUGHAN ON  L4K 5K8 0   1999-9-1
00:00:001999-9-1 00:00:00

Sample line that does not work:
306600121001001467  SMART ALARMS INC201 MILLWAY AVE UNIT
#8  \N  \N  VAUGHAN ON  L4K 5K8 0   1999-9-1 00:00:00   
1999-9-1 00:00:00

My code that generates the sample line that does not work:
tempfile.puts "\t#{record[0,22].strip}\t#{record[22,32].strip}\t#{record[54,32].strip}\t#{record[86,32].strip}\t#{record[118,32].strip}\t#{record[150,30].strip}\t#{record[180,2].strip}\t#{record[182,7].strip}\t0\t2000-01-01 00:00:00\t2000-01-01 00:00:00"

How can I get mysql to put the auto_numbers in for me?  Why is it even
accepting my numbers?

Thanks :-)

Does anyone also know how I can see the warnings created by the load file?
All I see is the summary with skipped and warnings.

Thanks :-)
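
A sketch of both ideas: name the columns in the LOAD DATA statement so
the id column is omitted entirely (and drop the leading tab from the
generated lines), letting auto_increment fill it in, and run SHOW
WARNINGS right after the load to see the individual warnings.  The path
and column names below are illustrative, not taken from the real table.

LOAD DATA LOCAL INFILE '/tmp/companies.load'
INTO TABLE companies
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(cust_no, name, address1, address2, city, province, postal_code,
 flag, created_at, updated_at);

SHOW WARNINGS;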
--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Performance Question And Problem

2006-11-23 Thread John Kopanas

I have the following query:

 UPDATE companies c
 SET
   total_annual_service_charge =
 (
   SELECT SUM(annual_service_charge)
   FROM purchased_services ps WHERE ps.company_id = c.id
 );


It takes 1s to run when I have two tables of 500 rows, 4s with two
tables of 1000 rows, 15s with two tables of 2000 rows, and 90s with two
tables of 5000 rows.  This is ridiculous.  And I need to run it on two
tables of approx. 500,000 rows.  I need a better solution.

And there is an index on ps.company_id and c.id.  Any suggestions on
how I can improve my query?

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Question And Problem

2006-11-23 Thread John Kopanas

That is awesome... thanks.  I still am not sure exactly why this takes
2 seconds while my method took over a minute for the same amount of
rows.  In essence, don't the two methods do the same thing?
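
Not quite: the subquery in the SET clause is correlated, so it is
re-evaluated once for every row of companies, while the memory-table
approach computes all the sums in a single pass and then joins.  A
sketch that does the same thing in one statement (assuming a MySQL
version that accepts a derived table in a multiple-table UPDATE) is:

UPDATE companies c
JOIN (SELECT company_id, SUM(annual_service_charge) AS total_charge
      FROM purchased_services
      GROUP BY company_id) t ON t.company_id = c.id
SET c.total_annual_service_charge = t.total_charge;

Note that this join form leaves companies with no purchased_services rows
untouched, whereas the correlated-subquery form sets them to NULL.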

On 11/23/06, mos [EMAIL PROTECTED] wrote:

At 05:50 PM 11/23/2006, you wrote:
I have the following query:

  UPDATE companies c
  SET
total_annual_service_charge =
  (
SELECT SUM(annual_service_charge)
FROM purchased_services ps WHERE ps.company_id = c.id
  );


It takes 1s to run when I have two tables of 500 rows, 4s with two
tables of 1000 rows, 15s to run with two tables to run with 2000 rows,
90s for two tables of 5000 rows.  This is ridiculous.  And I need to
run it on two tables of approx. 500,000 rows.  I need a better
solution.

And there is an index on ps.company_id and c.id.  Any suggestions on
how I can improve my query?

--
John Kopanas
[EMAIL PROTECTED]

John,
 I would split it into separate sql statements.

1) Create a memory table of the totals:

drop table if exists CompanyTotals;
create table CompanyTotals type=memory select Company_Id Id,
SUM(annual_service_charge) ServCharge
FROM purchased_services ps group by Company_Id;
alter table CompanyTotals add index ix_Id (Id);

2) Update the Companies table with the CompanyTotals:
update Companies C, CompanyTotals CT set
total_annual_service_charge=CT.ServCharge where C.Id=CT.Id;

This should be much faster.

Mike



http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Updating One Table With the Sum() of a Grouping From Another...

2006-11-22 Thread John Kopanas

I have two tables: companies, purchased_services... for each company
in the companies table I want to sum up the annual_service_charge for
each row that is associated with the company in purchased_services.

I have the following query:

SELECT company_id, SUM(annual_service_charge) FROM purchased_services
GROUP BY company_id;

This above gives me the company_id and service charge.  How do I now
put that SUM(annual_service_charge) into
companies.total_annual_service_charge?

Thanks my friends :-).


--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Updating One Table With the Sum() of a Grouping From Another...

2006-11-22 Thread John Kopanas

Ahhh... I did the following:

UPDATE companies c
SET total_annual_service_charge = (SELECT SUM(annual_service_charge)
FROM purchased_services ps WHERE ps.company_id = c.id);

What think you? :-)

On 11/22/06, John Kopanas [EMAIL PROTECTED] wrote:

I have two tables: companies, purchased_services... for each company
in the companies table I want to sum up the annual_service_charge for
each row that is associated with teh company in purchased_services.

I have the following query:

SELECT company_id, SUM(annual_service_charge) FROM purchased_services
GROUP BY company_id;

This above gives me the company_id and service charge.  How do I now
put that SUM(annual_service_charge) into
companies.total_annual_service_charge?

Thanks my friends :-).


--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How Long Should It Take To Create an Index?

2006-11-17 Thread John Kopanas

I have a mysql table with 100K rows approximately.

We are creating the following index:
create index closed_base_cust_and_job on backlog_dev.closed_bases(cust_no,
jobno);

Both cust_no and jobno are varchars(255)

There are 56 fields in the table and no other indexes except on the primary
key.

Should it be taking a long time?  How long would you estimate it takes?  Is
there a way to estimate how long it takes?  What factors affect the time it
takes to create an index?

Thanks for your help.

Your Friend,

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info


Re: Re: How Long Should It Take To Create an Index?

2006-11-17 Thread John Kopanas

That is what I thought.  Then why would it be going at 99% CPU for over 20
minutes?  What factors would cause that?  We are using version 4.1.20-standard.

Would the fact that a website was still connected to the DB cause the problem?

Dan you rock!

On 11/17/06, Dan Buettner [EMAIL PROTECTED] wrote:


Scratch that, I just created a 10 row table with 2 varchar255's.
Creating a 2 col index on it took 2.09 seconds.  Could take longer due
to all the other fields you have in your table but a couple of minutes
at the outside.

Dan


On 11/17/06, Dan Buettner [EMAIL PROTECTED] wrote:
 John, I would guess with about 100,000 rows, it might take a couple
 minutes to create an index on two varchar-255 columns.  With modern
 hardware anyway. Very rough estimate.

 Factors include amount of RAM, speed of disks, speed of processors,
 other processes running and either locking table or just using
 resources.

 Do a SHOW PROCESSLIST; to see if your index creation is blocked.

 Dan


 On 11/17/06, John Kopanas [EMAIL PROTECTED] wrote:
  I have a mysql table with 100K rows approximately.
 
  We are creating the following index:
  create index closed_base_cust_and_job on
backlog_dev.closed_bases(cust_no,
  jobno);
 
  Both cust_no and jobno are varchars(255)
 
  There is 56 fields in the table and no other indeces except on the
primary
  key.
 
  Should it be taking a long time?  How long would some estimate it
takes?  Is
  there a way to estimate how long it takes?  What factors affect the
time it
  takes to create an index?
 
  Thanks for your help.
 
  Your Friend,
 
  --
  John Kopanas
  [EMAIL PROTECTED]
 
  http://www.kopanas.com
  http://www.cusec.net
  http://www.soen.info
 
 






--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info


How Do I Know If mySQL is using MyISAM or InnoDB?

2006-11-17 Thread John Kopanas

Is there a command at the command line that can tell me if I am using MyISAM
or InnoDB?  Thanks :-).
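
Two statements that report the storage engine table by table (a sketch;
companies and backlog_development are names borrowed from other threads
here, substitute your own):

SHOW CREATE TABLE companies\G
SHOW TABLE STATUS FROM backlog_development LIKE 'companies';

The ENGINE= (or TYPE= on older servers) clause in the first, and the
Engine column in the second, name MyISAM or InnoDB per table.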

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info


Importing Text File Into mySQL

2006-11-17 Thread John Kopanas

I have a text file with over 500K rows of data in it.  The problem is
that the data is not separated by commas; instead each field occupies a
fixed number of characters.  So for instance:

ID 1 -11
NAME 12-50
COMPANY_NAME 51-100
...

How would you parse and import this data into MySQL?

Thanks for your help :-).

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Importing Text File Into mySQL

2006-11-17 Thread John Kopanas

I am trying to figure out how this would work.  How does LOAD DATA
figure out where one column begins and another ends when some of the
data are addresses with spaces in them?
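
The short version, as a sketch: with both terminators empty, LOAD DATA
switches to fixed-width mode and splits each line on the declared widths
of the target table's columns, so spaces inside a field do not matter.
The staging table below uses the widths from the layout quoted above
(11, 39 and 50 characters); the table name and file path are illustrative.

CREATE TABLE import_staging (
  id           CHAR(11),
  name         CHAR(39),
  company_name CHAR(50)
);

LOAD DATA LOCAL INFILE '/tmp/export.txt'
INTO TABLE import_staging
FIELDS TERMINATED BY '' ENCLOSED BY ''
LINES TERMINATED BY '\n';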

On 11/17/06, Dan Nelson [EMAIL PROTECTED] wrote:

In the last episode (Nov 17), John Kopanas said:
 I have a text file with over 500K rows of data in it.  The problem is
 that the data is no seperated by commas but instead  occupy a certain
 amount of characters.  So for instance:

 ID 1 -11
 NAME 12-50
 COMPANY_NAME 51-100
 ...

 How would you parse import this data into mysql?

Create a table that matches your layout exactly, then

LOAD DATA ...
FIELDS TERMINATED BY ''
ENCLOSED BY '';

It's mentioned in one of the comments at
http://dev.mysql.com/doc/refman/5.0/en/load-data.html

--
Dan Nelson
[EMAIL PROTECTED]




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]