RE: Space is filling up

2004-07-08 Thread Jack Coxen
Probably not.  At this point, I'd say his options are limited to either
deleting data to free up some space or getting a bigger hard drive.  Unless
his situation is like mine and he only needs to keep the data for a certain
period of time, the bigger hard drive is definitely the way to go.  And
since hard drives are so cheap now, I'd be tempted to go that route even if
he can age out data.  I'm seeing ads for 200G drives for $90.00.

Jack

-Original Message-
From: gerald_clark [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 07, 2004 5:06 PM
To: 'Asif Iqbal'
Cc: [EMAIL PROTECTED]
Subject: Re: Space is filling up




Jack Coxen wrote:

 If your database contains time-based data, you could age out old 
 records.  I only need to keep data for 6 months so I run a nightly 
 script to delete any records more than 6 months old.  And before 
 anyone asks...yes, I also run another script to ANALYZE/OPTIMIZE my 
 tables.

But does he even have enough disk space to optimize tables?

 Jack


 -Original Message-
 From: Asif Iqbal [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, July 07, 2004 2:07 PM
 To: gerald_clark
 Cc: [EMAIL PROTECTED]
 Subject: Re: Space is filling up


 gerald_clark wrote:
  What about getting a bigger drive?

 I guess that would be my only option eh?

 
  Asif Iqbal wrote:
 
  Hi All
  
  My mysql data dir is filled up to 95% and only 345M left.
  
 Here is what the dir looks like
  
 1.9G ibdata1
 3.1G ibdata2
 3.0K ib_arch_log_02
 3.0K ib_arch_log_04
 25K  ib_arch_log_00
 65M  rt3
 67K  webrt.err
 70K  mysql
 101M ib_logfile0
 101M ib_logfile1
  
 Here is what my my.cnf file looks like
  
  [..related portion..]
  
  innodb_data_home_dir = /usr/local/mysql/data/
  innodb_data_file_path = ibdata1:1882M;ibdata2:10M:autoextend
  innodb_log_group_home_dir = /usr/local/mysql/data/
  innodb_log_arch_dir = /usr/local/mysql/data/
  # You can set .._buffer_pool_size up to 50 - 80 %
  # of RAM but beware of setting memory usage too high
  innodb_buffer_pool_size = 384M
  innodb_additional_mem_pool_size = 20M
  # Set .._log_file_size to 25 % of buffer pool size
  innodb_log_file_size = 100M
  innodb_log_buffer_size = 8M
  innodb_flush_log_at_trx_commit = 1
  innodb_lock_wait_timeout = 50
  
  Any help to clean the dir would be greatly appreciated
  
  Thanks
  
  
 
 

 -- 
 Asif Iqbal
 PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
 There's no place like 127.0.0.1

 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]






RE: Space is filling up

2004-07-07 Thread Jack Coxen
If your database contains time-based data, you could age out old records.  I
only need to keep data for 6 months so I run a nightly script to delete any
records more than 6 months old.  And before anyone asks...yes, I also run
another script to ANALYZE/OPTIMIZE my tables.
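The nightly cleanup described above can be sketched in a few lines. This is a hedged illustration, not the actual script; the table and column names (`ifInOctets_137`, `dtime`) are borrowed from elsewhere in this archive, and a real job would feed each statement through a database connection:

```python
def age_out_statements(table, ts_column, months=6):
    """Build the statements for a nightly age-out job: delete rows older
    than the retention window, then re-optimize and re-analyze the table.
    MySQL's own date arithmetic decides what "6 months ago" means."""
    return [
        f"DELETE FROM {table} WHERE {ts_column} < NOW() - INTERVAL {months} MONTH",
        f"OPTIMIZE TABLE {table}",
        f"ANALYZE TABLE {table}",
    ]

for stmt in age_out_statements("ifInOctets_137", "dtime"):
    print(stmt)
```

Cron would fire this nightly; note gerald_clark's point in this thread that OPTIMIZE TABLE itself needs free disk space to rebuild the table.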

Jack


-Original Message-
From: Asif Iqbal [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 07, 2004 2:07 PM
To: gerald_clark
Cc: [EMAIL PROTECTED]
Subject: Re: Space is filling up


gerald_clark wrote:
 What about getting a bigger drive?

I guess that would be my only option eh?

 
 Asif Iqbal wrote:
 
 Hi All
 
 My mysql data dir is filled up to 95% and only 345M left.
 
Here is what the dir looks like
 
 1.9G ibdata1
 3.1G ibdata2
 3.0K ib_arch_log_02
 3.0K ib_arch_log_04
 25K  ib_arch_log_00
 65M  rt3
 67K  webrt.err
 70K  mysql
 101M ib_logfile0
 101M ib_logfile1
 
Here is what my my.cnf file looks like
 
 [..related portion..]
 
 innodb_data_home_dir = /usr/local/mysql/data/
 innodb_data_file_path = ibdata1:1882M;ibdata2:10M:autoextend
 innodb_log_group_home_dir = /usr/local/mysql/data/
 innodb_log_arch_dir = /usr/local/mysql/data/
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
 innodb_buffer_pool_size = 384M
 innodb_additional_mem_pool_size = 20M
 # Set .._log_file_size to 25 % of buffer pool size
 innodb_log_file_size = 100M
 innodb_log_buffer_size = 8M
 innodb_flush_log_at_trx_commit = 1
 innodb_lock_wait_timeout = 50
 
 Any help to clean the dir would be greatly appreciated
 
 Thanks
  
 
 
 

-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
There's no place like 127.0.0.1



RE: How to turn rows into columns

2004-06-29 Thread Jack Coxen
You might also want to read these articles:

http://www.onlamp.com/lpt/a/4397

and

http://dev.mysql.com/tech-resources/articles/wizard/index.html

In addition, there's a Perl module - DBIx::SQLCrosstab - that may help.
It's available from CPAN at
http://search.cpan.org/~gmax/DBIx-SQLCrosstab-1.17.

Hope this helps,

Jack

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 2:34 PM
To: [EMAIL PROTECTED]
Cc: MySql List
Subject: Re: How to turn rows into columns



What you are trying to create is called a pivot table or a cross-tab
report. I recently walked someone else through this same process. Read
through this thread and let me know if it helps you or not. You can also
Google the groups for MySQL pivot table and find other examples and
methods.

http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&safe=off&threadm=cautac%243u8%241%40FreeBSD.csie.NCTU.edu.tw&rnum=1&prev=/groups%3Fq%3DMySQL%2Bpivoting%2Btable%2Bsgreen%26ie%3DUTF-8%26hl%3Den

Best Wishes,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


 

From: Mauricio Pellegrini [EMAIL PROTECTED]
To: MySql List [EMAIL PROTECTED]
Date: 06/29/2004 04:35 PM
Subject: How to turn rows into columns
Reply-To: hrrg-inf

Hi, sorry to ask this question, but I'm in desperate need to accomplish
this report, and I don't even know if it is possible to be done.

The thing is I have to turn rows from one table into columns
in other table. The first table looks like this
 T1
---
id   cod
---
1bb
1ff
1gg
1kk
2rr
---

And it should be converted to the following format, where every cod value
pertaining to T1 should be located into T2 under the specified column

 T2
---
id   c1  c2  c3  c4
---
1bb  ff  gg  kk
2rr
---

I know how to perform this under other languages but don't know how to
do it with mysql's sql.
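For reference, the reshaping requested here is simple outside SQL; MySQL of this era has no native pivot, which is why the replies point at GROUP BY/IF tricks and external tools. A minimal sketch in Python, using the sample data above (the four-column width is an assumption taken from T2):

```python
from collections import defaultdict

def pivot(rows, width=4):
    """Turn (id, cod) rows into one list of c1..cN values per id,
    padding short groups with empty strings."""
    grouped = defaultdict(list)
    for id_, cod in rows:
        grouped[id_].append(cod)
    return {id_: (cods + [""] * width)[:width] for id_, cods in grouped.items()}

t1 = [(1, "bb"), (1, "ff"), (1, "gg"), (1, "kk"), (2, "rr")]
print(pivot(t1))  # {1: ['bb', 'ff', 'gg', 'kk'], 2: ['rr', '', '', '']}
```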

I think someone else must have had the same requirement before, at least
I hope so.


Thanks in advance,
Mauricio
















DISTINCT query

2004-06-01 Thread Jack Coxen
I'm having trouble with a query and hope someone of the list can help.  I'm
running version 4.0.16.  I'm trying to query against two tables:

mysql> DESCRIBE router;
+-------+---------------------+------+-----+---------+----------------+
| Field | Type                | Null | Key | Default | Extra          |
+-------+---------------------+------+-----+---------+----------------+
| rid   | int(11) unsigned    |      | PRI | NULL    | auto_increment |
| name  | char(120)           |      |     |         |                |
| pop   | char(10)            |      |     |         |                |
| popid | tinyint(3) unsigned |      |     | 0       |                |
+-------+---------------------+------+-----+---------+----------------+
4 rows in set (0.04 sec)

mysql> DESCRIBE interface;
+-------------+---------------------------+------+-----+---------+----------------+
| Field       | Type                      | Null | Key | Default | Extra          |
+-------------+---------------------------+------+-----+---------+----------------+
| id          | int(11) unsigned          |      | PRI | NULL    | auto_increment |
| name        | char(255)                 |      |     |         |                |
| rid         | int(11)                   |      |     | 0       |                |
| speed       | bigint(11)                | YES  |     | NULL    |                |
| description | char(255)                 | YES  |     | NULL    |                |
| status      | enum('active','inactive') | YES  |     | active  |                |
+-------------+---------------------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)

using the following query:

SELECT router.name AS Router,
SUM(IF(speed='64000',1,0)) AS 64K,
SUM(IF(speed='128000',1,0)) AS 128K,
SUM(IF(speed='192000',1,0)) AS 192K,
SUM(IF(speed='256000',1,0)) AS 256K,
SUM(IF(speed='384000',1,0)) AS 384K,
SUM(IF(speed='512000',1,0)) AS 512K,
SUM(IF(speed='768000',1,0)) AS 768K,
SUM(IF(speed='1024000',1,0)) AS 1M,
SUM(IF(speed='1152000',1,0)) AS 1152K,
SUM(IF(speed='128',1,0)) AS 1280K,
SUM(IF(speed='1536000'
OR speed='1544000',1,0)) AS 'V/T1',
SUM(IF(speed='300',1,0)) AS 3M,
SUM(IF(speed='600',1,0)) AS 6M,
SUM(IF(speed='900',1,0)) AS 9M,
SUM(IF(speed='1000',1,0)) AS 10M,
SUM(IF(speed='1200',1,0)) AS 12M,
SUM(IF(speed='2400',1,0)) AS 24M,
SUM(IF(speed>'4000'
AND speed<'4600',1,0)) AS 'DS3/T3',
SUM(IF(speed='1',1,0)) AS 100M,
SUM(IF(speed='15500',1,0)) AS OC3,
SUM(IF(speed>'59900'
AND speed<'65000',1,0)) AS OC12,
SUM(IF(speed='10',1,0)) AS Gigabit,
SUM(IF(speed='115000',1,0)) AS Dialup,
SUM(IF(speed<>'64000'
AND speed!='128000'
AND speed!='192000'
AND speed!='256000'
AND speed!='384000'
AND speed!='512000'
AND speed!='768000'
AND speed!='1024000'
AND speed!='1152000'
AND speed!='128'
AND speed!='1536000'
AND speed!='1544000'
AND speed!='300'
AND speed!='600'
AND speed!='900'
AND speed!='1000'
AND speed!='1200'
AND speed!='2400'
AND speed NOT BETWEEN '4000' AND '4600'
AND speed!='1'
AND speed!='15500'
AND speed NOT BETWEEN '59900' AND '65000'
AND speed!='10'
AND speed!='115000',1,0)) AS Other,
COUNT(*) AS Total
FROM router INNER JOIN interface USING (rid)
GROUP BY router.rid
ORDER BY router.name;

The query as written runs fine.  My problem is that I want to SELECT only
DISTINCT values for interface.description and not display the descriptions
in the result.

Is this possible?  And if so, how do I do it?
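One candidate route, assuming the goal is to count each description only once per router: COUNT(DISTINCT ...) is available in this MySQL series, so `COUNT(DISTINCT interface.description)` could stand in for `COUNT(*)` in the Total column. The intent, sketched in Python with made-up rows:

```python
def distinct_description_counts(rows):
    """rows: (router, description) pairs.  Returns {router: number of
    distinct descriptions}, mimicking COUNT(DISTINCT description)
    grouped by router."""
    seen = {}
    for router, desc in rows:
        seen.setdefault(router, set()).add(desc)
    return {router: len(descs) for router, descs in seen.items()}

rows = [
    ("router_1", "link to siteA"),
    ("router_1", "link to siteA"),   # moved interface: same description
    ("router_1", "link to siteB"),
]
print(distinct_description_counts(rows))  # {'router_1': 2}
```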

Thanks,

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705



RE: DISTINCT query

2004-06-01 Thread Jack Coxen
The output looks like this...

+--+-+--+--+ ~snip~ +---+
| Router   | 64K | 128K | 192K | ~snip~ | Total |
+--+-+--+--+ ~snip~ +---+
| router_1 |   0 |0 |0 | ~snip~ | 6 |
| router_2 |   1 |6 |0 | ~snip~ |70 |
| router_3 |   0 |0 |0 | ~snip~ | 3 |
| router_4 |   0 |0 |0 | ~snip~ | 3 |
| router_5 |   0 |0 |0 | ~snip~ | 1 |

interface.description isn't listed and I'd like it to stay this way.  

interface.description is usually a unique value.  However, when an interface
gets moved, a new record gets created with the same interface.description
but different interface.id and interface.name values.  I'd like to eliminate
those semi-duplicate records from the count.  I know...I could modify the
records when an interface changes but for various reasons I need to keep
both the before and after data.

Jack

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 01, 2004 1:52 PM
To: Jack Coxen
Cc: MySQL List (E-mail)
Subject: Re: DISTINCT query



Jack,

I am not sure what you want your output to look like... If you want the
same columns (the routerid with the counts of how many interfaces operate
at which speed) but broken down by unique descriptions that means to me
that you have multiple descriptions for each unique interface ID... Is this
the case? If it isn't then your existing GROUP BY will already make your
report unique by interface.comment (because there is only 1 comment per ID,
understand?)

If you could provide a prototype header to the report you want to make I
will try my best to help you work it out.

Yours,

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




 

  Jack Coxen

  [EMAIL PROTECTED]To:   MySQL List
(E-mail) [EMAIL PROTECTED]
  ve.com  cc:

   Fax to:

  06/01/2004 01:12 Subject:  DISTINCT query

  PM

 

 





I'm having trouble with a query and hope someone of the list can help.  I'm
running version 4.0.16.  I'm trying to query against two tables:

mysql> DESCRIBE router;
+-------+---------------------+------+-----+---------+----------------+
| Field | Type                | Null | Key | Default | Extra          |
+-------+---------------------+------+-----+---------+----------------+
| rid   | int(11) unsigned    |      | PRI | NULL    | auto_increment |
| name  | char(120)           |      |     |         |                |
| pop   | char(10)            |      |     |         |                |
| popid | tinyint(3) unsigned |      |     | 0       |                |
+-------+---------------------+------+-----+---------+----------------+
4 rows in set (0.04 sec)

mysql> DESCRIBE interface;
+-------------+---------------------------+------+-----+---------+----------------+
| Field       | Type                      | Null | Key | Default | Extra          |
+-------------+---------------------------+------+-----+---------+----------------+
| id          | int(11) unsigned          |      | PRI | NULL    | auto_increment |
| name        | char(255)                 |      |     |         |                |
| rid         | int(11)                   |      |     | 0       |                |
| speed       | bigint(11)                | YES  |     | NULL    |                |
| description | char(255)                 | YES  |     | NULL    |                |
| status      | enum('active','inactive') | YES  |     | active  |                |
+-------------+---------------------------+------+-----+---------+----------------+
6 rows in set (0.02 sec)

using the following query:

SELECT router.name AS Router,
SUM(IF(speed='64000',1,0)) AS 64K,
SUM(IF(speed='128000',1,0)) AS 128K,
SUM(IF(speed='192000',1,0)) AS 192K,
SUM(IF(speed='256000',1,0)) AS 256K,
SUM(IF(speed='384000',1,0)) AS 384K,
SUM(IF(speed='512000',1,0)) AS 512K,
SUM(IF(speed='768000',1,0)) AS 768K,
SUM(IF(speed='1024000',1,0)) AS 1M,
SUM(IF(speed='1152000',1,0)) AS 1152K,
SUM(IF(speed='128',1,0)) AS 1280K,
SUM(IF(speed='1536000'
OR speed='1544000',1,0)) AS 'V/T1',
SUM(IF(speed='300',1,0)) AS 3M,
SUM(IF(speed='600',1,0)) AS 6M,
SUM(IF(speed='900',1,0)) AS 9M,
SUM(IF(speed='1000',1,0)) AS 10M,
SUM(IF(speed='1200',1,0)) AS 12M,
SUM(IF(speed='2400',1,0)) AS 24M,
SUM(IF(speed>'4000'
AND speed<'4600',1,0)) AS 'DS3/T3',
SUM(IF(speed='1',1,0)) AS 100M,
SUM(IF(speed='15500',1,0)) AS OC3,
SUM(IF(speed>'59900'
AND speed<'65000',1,0)) AS OC12,
SUM(IF(speed='10',1,0)) AS Gigabit,
SUM(IF(speed='115000',1,0)) AS Dialup,
SUM(IF(speed<>'64000'
AND speed!='128000'
AND speed!='192000'
AND speed!='256000'
AND speed!='384000'
AND speed!='512000

RE: Case Sensitive

2004-05-12 Thread Jack Coxen
This isn't caused by MySQL; it's an OS issue.  Linux and Unix filesystems
are case sensitive, and MySQL maps table names onto file names, so unless a
program is specifically written to be case-insensitive, it is case
sensitive there as well.

Jack

-Original Message-
From: Andre MATOS [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 12, 2004 10:53 AM
To: [EMAIL PROTECTED]
Subject: Case Sensitive


Hi List,
  
I have three MySQL servers. One is running on Windows XP, another on MacOS 
X, and finally one on Linux. All servers have the same version: 4.0.18.
  
Every time I create tables on the Windows XP and MacOS servers, MySQL
creates the database without any problems with the table names (it is not
case sensitive).  However, on Linux I have problems (it follows the name
exactly, uppercase and lowercase; it is case sensitive).

So, for example:
On Windows/Mac: TestABC is created as testabc
On Linux: TestABC is created as TestABC
 
1) How can I disable this in the MySQL server running on Linux?
 
2) What is better? Disabling it on Linux, or enabling it on the other two?

By the way, I have the same problem with the usernames and passwords. How 
can I enable/disable MySQL case sensitive for data?
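For what it's worth, MySQL does expose a server option for the table-name half of this question: `lower_case_table_names`. A hedged my.cnf sketch; check the option against your server version's documentation before enabling it on a server that already has mixed-case tables, since existing names are not converted:

```ini
[mysqld]
# Store table names in lowercase and compare them case-insensitively,
# matching the Windows/MacOS behaviour described above.
lower_case_table_names = 1
```

Case sensitivity of the data itself (string comparisons) is a separate matter, governed by column types and character sets rather than by this option.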

Thanks.
  
Andre

-- 
Andre Matos
[EMAIL PROTECTED]





RE: many updates really slow

2004-05-06 Thread Jack Coxen
I'm guessing here but it sounds like you have the cmd.sql file on your local
machine (in Europe).  If that's the case, compress the file, ftp it to the
US server, uncompress it and do the load locally on the US server.

If you're doing the update remotely, you're probably being killed by
transmission time.  Transferring the file in toto to the US server will be
quicker than paying the communications lag of sending a command from
Europe to the US, having the command execute and then receiving the status
back in Europe...times 5000 records.

Hope this helps,

Jack

-Original Message-
From: William Wang [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 06, 2004 1:12 PM
To: [EMAIL PROTECTED]
Subject: many updates really slow


Hi everyone,

Please help.

I have MySQL server running on host A in US and I am
using it on host B in Europe. Every query takes about
0.3 seconds.

Now I want to do update db with 5000 updates. So I put
all the UPDATES commands in a file cmd.sql and do:

mysql -hA < cmd.sql

and it takes 30 MINUTES which is reasonable: 0.3 *
5000 but unbearable.

Is there any better way to do this? Am I doing
something stupid? Thanks a lot.
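The arithmetic behind the slowdown, and behind the ship-the-file advice in the replies, can be made explicit. A rough cost model only, not a measurement; every number below is an assumption:

```python
def batch_time_estimate(n_stmts, rtt_s, transfer_s=0.0, local_per_stmt_s=0.001):
    """Compare two strategies: one network round trip per statement,
    versus shipping the whole file once and executing it locally.
    Returns (seconds_remote, seconds_shipped)."""
    per_statement = n_stmts * rtt_s
    ship_and_run = transfer_s + n_stmts * local_per_stmt_s
    return per_statement, ship_and_run

remote, shipped = batch_time_estimate(5000, 0.3, transfer_s=10.0)
print(remote / 60, shipped)  # roughly 25 minutes vs. 15 seconds
```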




__
Do you Yahoo!?
Win a $20,000 Career Makeover at Yahoo! HotJobs  
http://hotjobs.sweepstakes.yahoo.com/careermakeover 



RE: many updates really slow

2004-05-06 Thread Jack Coxen
I'd suggest a shell script - a couple of them, actually.  Have a shell
script on the Europe machine that runs continually.  It checks for the
existence of the cmd.sql file.  If it finds it, it FTPs it to the US server
and then deletes or archives the file locally.  If it doesn't find it, it
sleeps for 5 minutes, wakes up and looks again - minimum cpu cycles wasted.

The second script would be on the US server.  It would check every 5 minutes
for the cmd.sql file.  If it finds it, it would load it into MySQL and then
either delete or archive the file.  If it doesn't find the file, it goes
back to sleep.
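One pass of the polling loop described above might look like the sketch below. This is an illustration only: the directory layout, the 5-minute cadence, and what `handle` does (FTP on the Europe side, `mysql < file` on the US side) are all left to the caller:

```python
import os
import shutil
import time  # a daemon would wrap watch_once in: while True: ...; time.sleep(300)

def watch_once(watch_dir, archive_dir, handle, filename="cmd.sql"):
    """If the watched file exists, process it with `handle`, then move it
    into the archive directory so it is not processed twice.  Returns
    True if work was done, False if there was nothing to do."""
    path = os.path.join(watch_dir, filename)
    if not os.path.exists(path):
        return False
    handle(path)
    os.makedirs(archive_dir, exist_ok=True)
    shutil.move(path, os.path.join(archive_dir, filename))
    return True
```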

Jack

-Original Message-
From: William Wang [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 06, 2004 1:43 PM
To: Jack Coxen; [EMAIL PROTECTED]
Subject: RE: many updates really slow


Thanks Jack and Jeremy.

But this is part of my application and I need to do
this automatically. I don't want to write a
server-client application to just to handle file
transimission and do pass the update cmd to local
server. Any idea? Thanks.

William

--- Jack Coxen [EMAIL PROTECTED] wrote:
 I'm guessing here but it sounds like you have the
 cmd.sql file on your local
 machine (in Europe).  If that's the case, compress
 the file, ftp it to the
 US server, uncompress it and do the load locally on
 the US server.
 
 If you're doing the update remotely, you're probably
 being killed by
 transmission time.  Transferring the file in toto to
 the US server will be
 quicker than paying the communications lag of
 sending a command from
 Europe to the US, having the command execute and
 then receiving the status
 back in Europe...times 5000 records.
 
 Hope this helps,
 
 Jack
 
 -Original Message-
 From: William Wang [mailto:[EMAIL PROTECTED]
 Sent: Thursday, May 06, 2004 1:12 PM
 To: [EMAIL PROTECTED]
 Subject: many updates really slow
 
 
 Hi everyone,
 
 Please help.
 
 I have MySQL server running on host A in US and I am
 using it on host B in Europe. Every query takes
 about
 0.3 seconds.
 
 Now I want to do update db with 5000 updates. So I
 put
 all the UPDATES commands in a file cmd.sql and do:
 
 mysql -hA < cmd.sql
 
 and it takes 30 MINUTES which is reasonable: 0.3 *
 5000 but unbearable.
 
 Is there any better way to do this? Am I doing
 something stupid? Thanks a lot.
 
 
   
   
 __
 Do you Yahoo!?
 Win a $20,000 Career Makeover at Yahoo! HotJobs  
 http://hotjobs.sweepstakes.yahoo.com/careermakeover 
 
 





__
Do you Yahoo!?
Win a $20,000 Career Makeover at Yahoo! HotJobs  
http://hotjobs.sweepstakes.yahoo.com/careermakeover 


Complicated query problem

2004-05-05 Thread Jack Coxen
I've got the following query that produces a large table for me.

SELECT
router.name AS Router,
SUM(IF(speed='64000',1,0)) AS 64K,
SUM(IF(speed='128000',1,0)) AS 128K,
SUM(IF(speed='192000',1,0)) AS 192K,
SUM(IF(speed='256000',1,0)) AS 256K,
SUM(IF(speed='384000',1,0)) AS 384K,
SUM(IF(speed='512000',1,0)) AS 512K,
SUM(IF(speed='768000',1,0)) AS 768K,
SUM(IF(speed='1024000',1,0)) AS 1M,
SUM(IF(speed='1152000',1,0)) AS 1152K,
SUM(IF(speed='128',1,0)) AS 1280K,
SUM(IF(speed='1536000'
OR speed='1544000',1,0)) AS 'V/T1',
SUM(IF(speed='300',1,0)) AS 3M,
SUM(IF(speed='600',1,0)) AS 6M,
SUM(IF(speed='900',1,0)) AS 9M,
SUM(IF(speed='1000',1,0)) AS 10M,
SUM(IF(speed='1200',1,0)) AS 12M,
SUM(IF(speed='2400',1,0)) AS 24M,
SUM(IF(speed BETWEEN '4000' AND '4600',1,0)) AS 'DS3/T3',
SUM(IF(speed='1',1,0)) AS 100M,
SUM(IF(speed='15500',1,0)) AS OC3,
SUM(IF(speed BETWEEN '59900' AND '65000',1,0)) AS OC12,
SUM(IF(speed='10',1,0)) AS Gigabit,
SUM(IF(speed='115000',1,0)) AS Dialup,
SUM(IF(speed<>'64000'
AND speed!='128000'
AND speed!='192000'
AND speed!='256000'
AND speed!='384000'
AND speed!='512000'
AND speed!='768000'
AND speed!='1024000'
AND speed!='1152000'
AND speed!='128'
AND speed!='1536000'
AND speed!='1544000'
AND speed!='300'
AND speed!='600'
AND speed!='900'
AND speed!='1000'
AND speed!='1200'
AND speed!='2400'
AND speed NOT BETWEEN '4000' AND '4600'
AND speed!='1'
AND speed!='15500'
AND speed NOT BETWEEN '59900' AND '65000'
AND speed!='10'
AND speed!='115000',1,0)) AS Other,
COUNT(*) AS Total
FROM router INNER JOIN interface USING (rid)
GROUP BY router.rid
ORDER BY router.name;

The query as written works just fine although I'm certain there's got to be
a more efficient way of doing the same thing.  I'm relatively new to MySQL
so I took the brute force approach.

My problem is that I want to produce totals of each of the columns and can't
figure out how to do it.  Any suggestions on how I can do this?
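Two candidate routes, both hedged: MySQL's `GROUP BY ... WITH ROLLUP` modifier produces exactly this kind of totals row, but it arrived in the 4.1 series, so on a 4.0 server the portable option is to sum the columns client-side. A sketch of the client-side version, with made-up rows shaped like the report:

```python
def add_totals_row(rows, label_col="Router"):
    """Append a 'Totals' row that sums every non-label column of the
    per-router report rows (each row a dict)."""
    if not rows:
        return rows
    totals = {label_col: "Totals"}
    for col in rows[0]:
        if col != label_col:
            totals[col] = sum(r[col] for r in rows)
    return rows + [totals]

report = [
    {"Router": "router_1", "64K": 0, "128K": 0, "Total": 6},
    {"Router": "router_2", "64K": 1, "128K": 6, "Total": 70},
]
print(add_totals_row(report)[-1])  # {'Router': 'Totals', '64K': 1, '128K': 6, 'Total': 76}
```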

Thanks,

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705



RE: when 4.1.2 release

2004-04-20 Thread Jack Coxen
Is there a planned release date for the production version of 4.1.x?  I'd
love to be able to use sub-queries but I don't want to use a non-production
version on a production database.

Jack

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Monday, April 19, 2004 6:30 PM
To: electroteque; Victoria Reznichenko; [EMAIL PROTECTED]
Subject: RE: when 4.1.2 release


At 7:11 +1000 4/20/04, electroteque wrote:
Huh as in production ?

Surely you jest.  4.1.1 was an alpha release.  It's unlikely
that 4.1.x would go straight to production-level in a single
step.  I believe that 4.1.2 will also be an alpha release.

That said, I encourage you to go ahead and try it out.  It
has lots of bugfixes and new stuff:

http://dev.mysql.com/doc/mysql/en/News-4.1.2.html


  -Original Message-
  From: Victoria Reznichenko [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, April 20, 2004 5:52 AM
  To: [EMAIL PROTECTED]
  Subject: Re: when 4.1.2 release


  Marek Lewczuk [EMAIL PROTECTED] wrote:
   Hello,
   when do you plan to release 4.1.2 version ?
  

   It will be released in several weeks.


-- 
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com



RE: export to csv or excel

2004-04-15 Thread Jack Coxen
I use a Perl module called Spreadsheet::WriteExcel.  You can get it from
CPAN at http://search.cpan.org/~jmcnamara/Spreadsheet-WriteExcel-0.42/.

I generate tons of reports monthly, using cron to fire off Perl scripts that
output my data as Excel spreadsheets using this module.  Implementation is
fairly easy, and the documentation that comes with the module is clearly
written, with tons of examples.

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705

-Original Message-
From: Bart De Man [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 15, 2004 6:59 AM
To: [EMAIL PROTECTED]
Subject: export to csv or excel


Hi,

Do you know a good tool to export mysql tables to csv or excel files? It
should be able to run on a Linux system, and should be able to be scheduled
(cron job).
I worked with EMS Mysql Export on a Windows system, this worked fine, but I
can't get it to work on the Linux system.
I also tried with SELECT ... INTO OUTFILE ... but this gives problems when
there are end-of-line characters in the data.
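Since the sticking point here is end-of-line characters, it's worth noting that a proper CSV writer solves that by quoting. A minimal Python sketch; the Spreadsheet::WriteExcel module recommended in the reply is the Perl route, and this covers only the CSV half of the question:

```python
import csv
import io

def rows_to_csv(rows):
    """Serialize result rows to CSV text.  The csv module quotes any
    field containing the delimiter or an end-of-line character, which is
    exactly the case that breaks naive SELECT ... INTO OUTFILE exports."""
    buf = io.StringIO()
    csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerows(rows)
    return buf.getvalue()

data = [(1, "plain"), (2, "two\nlines")]
print(rows_to_csv(data))
```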

Thanks,

Bart


Query in MySQL vs. PERL

2004-04-06 Thread Jack Coxen
I'm using MySQL to track data throughput information on my company's
routers.  The routers are queried every 5 minutes 24 hours/day.  I need to
produce a report showing data accumulated during business hours - Monday
through Friday, 8:00-5:00.

The program to pull the data and manipulate it will be written in Perl.
What I don't know is: will it be more efficient to have the Perl program
query data from MySQL only for business hours, or should I write the MySQL
query (and sub-query) to SELECT only data from business hours?  I can do
either one; I just don't know which one would be best.
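The conventional advice, hedged: pushing the filter into SQL keeps the off-hours rows off the wire, and with tables of millions of rows that difference usually dominates any client-side cost. The window itself is trivial to express either way. A Python sketch of the predicate, with the roughly equivalent MySQL clause in a comment; it assumes "8:00-5:00" means 08:00 up to but not including 17:00:

```python
from datetime import datetime

def is_business_hours(dt):
    """Monday-Friday, 08:00-17:00, matching the report definition above."""
    # Rough MySQL-side equivalent (DAYOFWEEK: 1=Sunday .. 7=Saturday):
    #   WHERE DAYOFWEEK(dtime) BETWEEN 2 AND 6 AND HOUR(dtime) BETWEEN 8 AND 16
    return dt.weekday() < 5 and 8 <= dt.hour < 17

print(is_business_hours(datetime(2004, 4, 6, 9, 30)))   # Tuesday 09:30 -> True
print(is_business_hours(datetime(2004, 4, 4, 9, 30)))   # Sunday  09:30 -> False
```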

I'm running MySQL 4.0.16 standard on a Sun e250 running Solaris 8.  The
database consists of around 650 tables ranging in size from 20 KB to 3-4 GB
(approx. 12 million records).

Any help and advice would be greatly appreciated.  If more info is needed,
please let me know.

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705



RE: Index not functioning - update

2004-03-29 Thread Jack Coxen
A quick update on this.

I tried upgrading to the latest production MySQL - 4.0.18 - with no change.
So, if it's a bug, it's an unreported and uncorrected bug.

Also,  I tried the query against different size tables.  If I try the query
against ifInOctets_35, the index works.  If I try it against ifInOctets_83,
the index doesn't work.  The files differ as follows:

TABLE NAME      ROWS    FILE SIZE   INDEX WORKED?
ifInOctets_35   41240   865977      Yes
ifInOctets_83   41923   880362      No

I have no idea what (if anything) this means but the query works
consistently on tables smaller than ifInOctets_35 and consistently doesn't
work on tables larger than ifInOctets_83 - at least so far.

I know that a compound index of id and dtime would fix the problem but it
would also leave me with a non-standard installation.  I'll do that if I
have to but I'd prefer to fix the existing setup of a single index on dtime.

Jack

-Original Message-
From: Jack Coxen [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 24, 2004 1:51 PM
To: '[EMAIL PROTECTED]'; MySQL List (E-mail)
Subject: RE: Index not functioning


I tried using 'USE INDEX' and 'BETWEEN' but it didn't change anything.

EXPLAIN SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137
USE INDEX (dtime) WHERE id=2809 AND dtime BETWEEN FROM_UNIXTIME(107397)
AND FROM_UNIXTIME(1076734799) ORDER BY dtime;

+----------------+------+---------------+------+---------+------+---------+-----------------------------+
| table          | type | possible_keys | key  | key_len | ref  | rows    | Extra                       |
+----------------+------+---------------+------+---------+------+---------+-----------------------------+
| ifInOctets_137 | ALL  | dtime         | NULL |    NULL | NULL | 9279150 | Using where; Using filesort |
+----------------+------+---------------+------+---------+------+---------+-----------------------------+
1 row in set (0.00 sec)

Is my syntax wrong?   

The resultset size should be around 8640 rows - 5 minute interval data for
30 days - 12 X 24 X 30 = 8640

-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 24, 2004 1:22 PM
To: Jack Coxen; MySQL List (E-mail)
Subject: RE: Index not functioning


Optimizers work on a thirty percent rule: if the resultset will contain
thirty percent of the table, a table scan will be performed. If you want to
force the use of an index you can use the `USE INDEX (index_name)` syntax.
You should also look at your query. You have three criteria in your WHERE
clause:
id=2809
dtime>FROM_UNIXTIME(107397)
dtime<=FROM_UNIXTIME(1076734799)
A compound index containing id and dtime should be created.
Can you use a BETWEEN statement?
WHERE id = 2809 AND (dtime BETWEEN start AND stop)
...
Also, what is the sum of your `>` and `<=` resultsets? This value may be
larger than you expect.

On Wed, 24 Mar 2004, Jack Coxen wrote:
 Is there a way of changing what it believes?  The resultset size is only
 a few thousand lines.  
 
 And, btw, why does it believe that a table scan is more efficient than
 using an index? 
 
 Jack 
 
 -Original Message- 
 From: Victor Pendleton [ mailto:[EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED] ] 
 Sent: Wednesday, March 24, 2004 12:24 PM 
 To: Jack Coxen; MySQL List (E-mail) 
 Subject: Re: Index not functioning 
 
 
 The optimizer is informing you that `it` believes a table scan is more 
 efficient than using an index due to the resultset size. 
 
 On Wed, 24 Mar 2004, Jack Coxen wrote:
  I have a series of tables that were created by: 
  
CREATE TABLE ifInOctets ( 
  id int(11) NOT NULL default '0', 
  dtime datetime NOT NULL default '-00-00 00:00:00', 
  counter bigint(20) NOT NULL default '0', 
  KEY ifInOctets_idx (dtime) 
); 
  
  When I run a query against any of the tables, the index isn't used.
 The 
  query syntax is: 
  
SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137 WHERE 
  id=2809 AND dtime>FROM_UNIXTIME(107397) AND 
  dtime<=FROM_UNIXTIME(1076734799) ORDER BY dtime; 
  
  Running an EXPLAIN of that command gives: 
  

 
 ++--+---+--+-+--+---
 
  --+- 
  + 
| table  | type | possible_keys | key  | key_len | ref 
  | 
  rows| Extra   | 

 
 ++--+---+--+-+--+---
 
  --+- 
  + 
| ifInOctets_137 | ALL  | dtime | NULL |NULL | NULL 
  | 
  9279150 | Using where; Using filesort | 

 
 ++--+---+--+-+--+---
 
  --+- 
  + 
1 row in set (0.00 sec) 
  
  I'm running on a Sun E250 w/RAID running Solaris 8.  I'm running

RE: Index not functioning - update

2004-03-29 Thread Jack Coxen
No, this is a production system.  I can't risk running non-production level
software.

I could try 4.1.1 on my development system but, even if it worked, I still
couldn't put that solution into place on the production server.

Jack

-Original Message-
From: dan [mailto:[EMAIL PROTECTED]
Sent: Monday, March 29, 2004 12:24 PM
To: Jack Coxen
Cc: '[EMAIL PROTECTED]'; MySQL List (E-mail)
Subject: RE: Index not functioning - update


Just out of curiosity, have you tried the 4.1.1 version?

There may (or may not) be a relevant change from 4.1.0 to 4.1.1.
It could be a total waste of time, but you never know.

Dan.

On Mon, 29 Mar 2004, Jack Coxen wrote:

 A quick update on this.
 
 I tried upgrading to the latest production MySQL - 4.0.18 - with no
change.
 So, if it's a bug, it's an unreported and uncorrected bug.
 
 Also,  I tried the query against different size tables.  If I try the
query
 against ifInOctets_35, the index works.  If I try it against
ifInOctets_83,
 the index doesn't work.  The files differ as follows:
 
 TABLE NAME      ROWS    FILE SIZE   INDEX WORKED?
 ifInOctets_35   41240   865977      Yes
 ifInOctets_83   41923   880362      No
 
 I have no idea what (if anything) this means but the query works
 consistently on tables smaller than ifInOctets_35 and consistently doesn't
 work on tables larger than ifInOctets_83 - at least so far.
 
 I know that a compound index of id and dtime would fix the problem but it
 would also leave me with a non-standard installation.  I'll do that if I
 have to but I'd prefer to fix the existing setup of a single index on
dtime.
 
 Jack
 
 -Original Message-
 From: Jack Coxen [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, March 24, 2004 1:51 PM
 To: '[EMAIL PROTECTED]'; MySQL List (E-mail)
 Subject: RE: Index not functioning
 
 
 I tried using 'USE INDEX' and 'BETWEEN' but it didn't change anything.
 
   EXPLAIN SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137
 USE INDEX (dtime) WHERE id=2809 AND dtime BETWEEN
FROM_UNIXTIME(107397)
 AND FROM_UNIXTIME(1076734799) ORDER BY dtime;
   

 +----------------+------+---------------+------+---------+------+---------+-----------------------------+
 | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra                       |
 +----------------+------+---------------+------+---------+------+---------+-----------------------------+
 | ifInOctets_137 | ALL  | dtime         | NULL |    NULL | NULL | 9279150 | Using where; Using filesort |
 +----------------+------+---------------+------+---------+------+---------+-----------------------------+
 1 row in set (0.00 sec)
 
 Is my syntax wrong?   
 
 The resultset size should be around 8640 rows - 5 minute interval data for
 30 days - 12 X 24 X 30 = 8640
 
 -Original Message-
 From: Victor Pendleton [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, March 24, 2004 1:22 PM
 To: Jack Coxen; MySQL List (E-mail)
 Subject: RE: Index not functioning
 
 
 Optimizers work on a thirty percent rule. If the resultset will contain
 thirty
 percent of the table then a table scan will be performed. If you want to
 `force` the use of an index you can use the `USE INDEX index-name`
syntax.
 You also should look at your query. You have three criteria in your where
 clause
 id=2809
 dtime>FROM_UNIXTIME(107397)
 dtime<=FROM_UNIXTIME(1076734799)
 A compound index containing id and dtime should be created.
 Can you use a BETWEEN statement?
 WHERE id = 2809 AND ( dtime BETWEEN start AND stop)
 ...
 Also what is the sum of your `>` and `<=` resultsets? This value may be larger
 than you expect.
 
  On Wed, 24 Mar 2004, Jack Coxen wrote:
  Is there a way of changing what it believes?  The resultset size is only
  a few thousand lines.  
  
  And, btw, why does it believe that a table scan is more efficient than
  using an index? 
  
  Jack 
  
  -Original Message- 
  From: Victor Pendleton [mailto:[EMAIL PROTECTED]]
  Sent: Wednesday, March 24, 2004 12:24 PM 
  To: Jack Coxen; MySQL List (E-mail) 
  Subject: Re: Index not functioning 
  
  
  The optimizer is informing you that `it` believes a table scan is more 
  efficient than using an index due to the resultset size. 
  
   On Wed, 24 Mar 2004, Jack Coxen wrote:
   I have a series of tables that were created by: 
   
 CREATE TABLE ifInOctets ( 
   id int(11) NOT NULL default '0', 
    dtime datetime NOT NULL default '0000-00-00 00:00:00', 
   counter bigint(20) NOT NULL default '0', 
   KEY ifInOctets_idx (dtime) 
 ); 
   
   When I run a query against any of the tables, the index isn't used.
  The 
   query syntax is: 
   
 SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137 WHERE 
   id=2809 AND dtime>FROM_UNIXTIME(107397) AND 
   dtime<=FROM_UNIXTIME(1076734799) ORDER BY dtime; 
   
   Running an EXPLAIN

Index not functioning

2004-03-24 Thread Jack Coxen
I have a series of tables that were created by:

CREATE TABLE ifInOctets (
  id int(11) NOT NULL default '0',
  dtime datetime NOT NULL default '0000-00-00 00:00:00',
  counter bigint(20) NOT NULL default '0',
  KEY ifInOctets_idx (dtime)
);

When I run a query against any of the tables, the index isn't used.  The
query syntax is:

SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137 WHERE
id=2809 AND dtime>FROM_UNIXTIME(107397) AND
dtime<=FROM_UNIXTIME(1076734799) ORDER BY dtime;

Running an EXPLAIN of that command gives:


+----------------+------+---------------+------+---------+------+---------+-----------------------------+
| table          | type | possible_keys | key  | key_len | ref  | rows    | Extra                       |
+----------------+------+---------------+------+---------+------+---------+-----------------------------+
| ifInOctets_137 | ALL  | dtime         | NULL |    NULL | NULL | 9279150 | Using where; Using filesort |
+----------------+------+---------------+------+---------+------+---------+-----------------------------+
1 row in set (0.00 sec)

I'm running on a Sun E250 w/RAID running Solaris 8.  I'm running MySQL
4.0.16 precompiled Solaris binary.  My my.cnf is essentially the my-huge.cnf
file.  Other people are running this application (RTG -
http://rtg.sourceforge.net) on various platforms and MySQL versions and ARE
NOT having this problem.

I've run ANALYZE and OPTIMIZE against the tables with no effect.  I've
dropped the indexes and recreated them with no effect.  I've done everything
I can think of with no effect.  I am now officially stumped.

Does anyone have any suggestions on what is going on and how I can fix it?
Any help would be greatly appreciated.

Thanks,

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705



RE: Index not functioning

2004-03-24 Thread Jack Coxen
Is there a way of changing what it believes?  The resultset size is only a
few thousand lines.  

And, btw, why does it believe that a table scan is more efficient than using
an index?

Jack

-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 24, 2004 12:24 PM
To: Jack Coxen; MySQL List (E-mail)
Subject: Re: Index not functioning


The optimizer is informing you that `it` believes a table scan is more
efficient than using an index due to the resultset size.

 On Wed, 24 Mar 2004, Jack Coxen wrote:
 I have a series of tables that were created by:
 
   CREATE TABLE ifInOctets (
 id int(11) NOT NULL default '0',
 dtime datetime NOT NULL default '0000-00-00 00:00:00',
 counter bigint(20) NOT NULL default '0',
 KEY ifInOctets_idx (dtime)
   );
 
 When I run a query against any of the tables, the index isn't used.  The
 query syntax is:
 
   SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137 WHERE
 id=2809 AND dtime>FROM_UNIXTIME(107397) AND
 dtime<=FROM_UNIXTIME(1076734799) ORDER BY dtime;
 
 Running an EXPLAIN of that command gives:
 
   
 +----------------+------+---------------+------+---------+------+---------+-----------------------------+
 | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra                       |
 +----------------+------+---------------+------+---------+------+---------+-----------------------------+
 | ifInOctets_137 | ALL  | dtime         | NULL |    NULL | NULL | 9279150 | Using where; Using filesort |
 +----------------+------+---------------+------+---------+------+---------+-----------------------------+
 1 row in set (0.00 sec)
 
 I'm running on a Sun E250 w/RAID running Solaris 8.  I'm running MySQL
 4.0.16 precompiled Solaris binary.  My my.cnf is essentially the
 my-huge.cnf file.  Other people are running this application (RTG -
 http://rtg.sourceforge.net) on various platforms and MySQL versions and
 ARE NOT having this problem.
 
 I've run ANALYZE and OPTIMIZE against the tables with no effect.  I've
 dropped the indexes and recreated them with no effect.  I've done everything
 I can think of with no effect.  I am now officially stumped.
 
 Does anyone have any suggestions on what is going on and how I can fix it?
 Any help would be greatly appreciated.
 
 Thanks,
 
 Jack
 
 Jack Coxen
 IP Network Engineer
 TelCove
 712 North Main Street
 Coudersport, PA 16915
 814-260-2705
 
 


RE: Index not functioning

2004-03-24 Thread Jack Coxen
I tried using 'USE INDEX' and 'BETWEEN' but it didn't change anything.

EXPLAIN SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137
USE INDEX (dtime) WHERE id=2809 AND dtime BETWEEN FROM_UNIXTIME(107397)
AND FROM_UNIXTIME(1076734799) ORDER BY dtime;

+----------------+------+---------------+------+---------+------+---------+-----------------------------+
| table          | type | possible_keys | key  | key_len | ref  | rows    | Extra                       |
+----------------+------+---------------+------+---------+------+---------+-----------------------------+
| ifInOctets_137 | ALL  | dtime         | NULL |    NULL | NULL | 9279150 | Using where; Using filesort |
+----------------+------+---------------+------+---------+------+---------+-----------------------------+
1 row in set (0.00 sec)

Is my syntax wrong?   

The resultset size should be around 8640 rows - 5 minute interval data for
30 days - 12 X 24 X 30 = 8640

-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 24, 2004 1:22 PM
To: Jack Coxen; MySQL List (E-mail)
Subject: RE: Index not functioning


Optimizers work on a thirty percent rule. If the resultset will contain thirty
percent of the table then a table scan will be performed. If you want to
`force` the use of an index you can use the `USE INDEX index-name` syntax.
You also should look at your query. You have three criteria in your where
clause
id=2809
dtime>FROM_UNIXTIME(107397)
dtime<=FROM_UNIXTIME(1076734799)
A compound index containing id and dtime should be created.
Can you use a BETWEEN statement?
WHERE id = 2809 AND ( dtime BETWEEN start AND stop)
...
Also what is the sum of your `>` and `<=` resultsets? This value may be larger
than you expect.
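[Editorial sketch of the compound-index advice above, using Python's built-in sqlite3 as a stand-in for MySQL. The table and index layout echo the thread (`id`, `dtime`, `counter`; single index on `dtime`); the data, the `ifInOctets_id_dtime` index name, and the id/date values are synthetic illustrations, not from the original posts.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ifInOctets_137 (id INTEGER, dtime TEXT, counter INTEGER)")
# Synthetic rows: 50 distinct ids, timestamps spread across February 2004.
con.executemany(
    "INSERT INTO ifInOctets_137 VALUES (?, ?, ?)",
    [(i % 50, f"2004-02-{i % 28 + 1:02d} {i % 24:02d}:00:00", i) for i in range(5000)],
)

# The original schema's single-column index on dtime...
con.execute("CREATE INDEX ifInOctets_idx ON ifInOctets_137 (dtime)")
# ...and the suggested compound index: equality column first, range column second.
con.execute("CREATE INDEX ifInOctets_id_dtime ON ifInOctets_137 (id, dtime)")
con.execute("ANALYZE")

# Equality on id plus a BETWEEN range on dtime, as in the thread's query shape.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT counter FROM ifInOctets_137"
    " WHERE id = 9 AND dtime BETWEEN '2004-02-01' AND '2004-02-15'"
    " ORDER BY dtime"
).fetchall()
print([row[-1] for row in plan])
```

With both indexes available, the planner prefers the compound (id, dtime) index for this equality-plus-range query; the single `dtime` index would force it back through every row matching the date range regardless of id.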

  On Wed, 24 Mar 2004, Jack Coxen wrote:
 Is there a way of changing what it believes?  The resultset size is only
 a few thousand lines.  
 
 And, btw, why does it believe that a table scan is more efficient than
 using an index? 
 
 Jack 
 
 -Original Message- 
 From: Victor Pendleton [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 24, 2004 12:24 PM 
 To: Jack Coxen; MySQL List (E-mail) 
 Subject: Re: Index not functioning 
 
 
 The optimizer is informing you that `it` believes a table scan is more 
 efficient than using an index due to the resultset size. 
 
  On Wed, 24 Mar 2004, Jack Coxen wrote:
  I have a series of tables that were created by: 
  
CREATE TABLE ifInOctets ( 
  id int(11) NOT NULL default '0', 
  dtime datetime NOT NULL default '0000-00-00 00:00:00', 
  counter bigint(20) NOT NULL default '0', 
  KEY ifInOctets_idx (dtime) 
); 
  
  When I run a query against any of the tables, the index isn't used.
 The 
  query syntax is: 
  
SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137 WHERE 
  id=2809 AND dtime>FROM_UNIXTIME(107397) AND 
  dtime<=FROM_UNIXTIME(1076734799) ORDER BY dtime; 
  
  Running an EXPLAIN of that command gives: 
  

 
  +----------------+------+---------------+------+---------+------+---------+-----------------------------+
  | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra                       |
  +----------------+------+---------------+------+---------+------+---------+-----------------------------+
  | ifInOctets_137 | ALL  | dtime         | NULL |    NULL | NULL | 9279150 | Using where; Using filesort |
  +----------------+------+---------------+------+---------+------+---------+-----------------------------+
  1 row in set (0.00 sec)
  
  I'm running on a Sun E250 w/RAID running Solaris 8.  I'm running MySQL
  4.0.16 precompiled Solaris binary.  My my.cnf is essentially the my-huge.cnf
  file.  Other people are running this application (RTG -
  http://rtg.sourceforge.net) on various platforms and MySQL versions and
  ARE NOT having this problem.
 
  I've run ANALYZE and OPTIMIZE against the tables with no effect.  I've
  dropped the indexes and recreated them with no effect.  I've done
  everything I can think of with no effect.  I am now officially stumped.
 
  Does anyone have any suggestions on what is going on and how I can fix it?
  Any help would be greatly appreciated.
  
  Thanks, 
  
  Jack 
  
  Jack Coxen 
  IP Network Engineer 
  TelCove 
  712 North Main Street 
  Coudersport, PA 16915 
  814-260-2705 
  
  
 
 





RE: Re[2]: Index not functioning

2004-03-24 Thread Jack Coxen
The index is on dtime only.  I've done some testing and dual-column indexes
using id and dtime are significantly faster.  However, I'm not one of the
developers of this package - RTG.  I have reported the speed increase to
them, though.

In the meantime, I'm trying to run a standard system so I'm hesitant to
modify the index structure.  I don't want to run into problems with future
revisions.  Right now, I just want my existing indexes to work.

Jack

-Original Message-
From: Richard Davey [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 24, 2004 2:01 PM
To: [EMAIL PROTECTED]
Subject: Re[2]: Index not functioning


Hello Jack,

Wednesday, March 24, 2004, 6:50:45 PM, you wrote:

JC I tried using 'USE INDEX' and 'BETWEEN' but it didn't change anything.

JC EXPLAIN SELECT counter, UNIX_TIMESTAMP(dtime) FROM ifInOctets_137
JC USE INDEX (dtime) WHERE id=2809 AND dtime BETWEEN
FROM_UNIXTIME(107397)
JC AND FROM_UNIXTIME(1076734799) ORDER BY dtime;

Does your index include both id and dtime in a single index? If not,
it probably ought to if the above is a typical query you need to run
on this table.

-- 
Best regards,
 Richard Davey
 http://www.phpcommunity.org/wiki/296.html



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


$sth->rows problem

2003-09-08 Thread Jack Coxen
I've cobbled together a short PERL script to delete older records from the
tables in a database.  The DELETE works fine but when I try to put in a
statement to return the number of rows deleted, I get an error.

The code reads as follows:

sub run_query {
#  print "Query: ", $statement, "\n";
  my $sth = $dbh->do($statement)
    or die "Can't prepare $statement: $dbh->errstr\n";
  $numrows = ($sth->rows); # This is where the problem is
  print $numrows;
}

$dbh = DBI->connect("DBI:mysql:$db:host=$host", $user, $pass);

$statement = "SELECT rid FROM router";
#  print "Query: ", $statement, "\n";
$sth = $dbh->prepare($statement)
  or die "Can't prepare $statement: $dbh->errstr\n";
$rv = $sth->execute
  or die "Can't execute the query: $sth->errstr\n";
while (@row = $sth->fetchrow_array ())
{
  push (@rid, $row[0]);
}

foreach $rid (@rid) {

  $statement = "DELETE FROM ifInOctets_$rid WHERE dtime < '$deldate $deltime'";
  run_query($statement);
  print "$statement\t$numrows\n";

  $statement = "DELETE FROM ifOutOctets_$rid WHERE dtime < '$deldate $deltime'";
  run_query($statement);
  print "$statement\n";

  $statement = "DELETE FROM ifInErrors_$rid WHERE dtime < '$deldate $deltime'";
  run_query($statement);
  print "$statement\n";

  $statement = "DELETE FROM ifInUcastPkts_$rid WHERE dtime < '$deldate $deltime'";
  run_query($statement);
  print "$statement\n";

  $statement = "DELETE FROM ifOutUcastPkts_$rid WHERE dtime < '$deldate $deltime'";
  run_query($statement);
  print "$statement\n\n";

}
The error message is:

Can't call method "rows" without a package or object reference at
./cleandb.pl line 34.

Could someone please kindly point out the glaringly obvious thing that I'm
missing?

Thanks,

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705
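[Editorial note on the error above: DBI's $dbh->do() returns the affected-row count itself (or undef on failure), not a statement handle, so there is no object to call ->rows on. The same "count the deleted rows" pattern, sketched with Python's built-in sqlite3 standing in for MySQL/DBI; the table name echoes the thread, the rows and cutoff date are synthetic.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ifInOctets_2 (id INTEGER, dtime TEXT, counter INTEGER)")
con.executemany(
    "INSERT INTO ifInOctets_2 VALUES (1, ?, 0)",
    [("2003-06-01 00:00:00",), ("2003-06-02 00:00:00",), ("2003-09-07 00:00:00",)],
)

# After a DELETE, the cursor's rowcount holds the number of rows removed --
# the sqlite3 analogue of what $sth->rows was reaching for.
cur = con.execute("DELETE FROM ifInOctets_2 WHERE dtime < '2003-08-01 00:00:00'")
print(cur.rowcount)  # → 2
```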



RE: Delete questions and speed/safety issues

2003-08-21 Thread Jack Coxen
Switching to another database isn't really an option.  I didn't write the
package and I'm not good enough to port it to another database or to rewrite
it for a multiple machine architecture.

Probably the only non-RAID option I have (assuming I want to keep more than
3 months worth of data) would be to add another couple of drives to the
server and then split the tables among them with links back to the original
database directory - sort of 'poor man's RAID'.  Drives I can get...it's the
RAID controller and enclosure that is the sticking point.

Jack
-Original Message-
From: Michael S. Fischer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 20, 2003 1:43 PM
To: 'Jack Coxen'; 'MySQL List (E-mail)'
Subject: RE: Delete questions and speed/safety issues


It's quite possible you're using the wrong tool for the job.  Since this
is a write-intensive environment, you may get better performance by
using another database such as PostgreSQL or Oracle.  Alternatively,
consider the option of re-architecting the application to distribute the
writes across multiple machines, each with its own small disk and
running its own instance of MySQL.  You need not necessarily have a big
RAID array to scale effectively, and sometimes the small soldiers
approach is more cost-effective.

In order to safely run myisamchk on a table, mysqld must be shut down,
or, alternatively, you must find some way to guarantee that the table is
not presently open by mysqld and that mysqld will not try to open the
files corresponding to the table while the check is in progress.

--Michael

 -Original Message-
 From: Jack Coxen [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, August 20, 2003 6:07 AM
 To: MySQL List (E-mail)
 Subject: Delete questions and speed/safety issues
 
 
 I'm in the process of writing a Perl script to delete records 
 out of a database when the records are more than 90 days old. 
  I'm running MySQL version 4.0.12-standard.  The db consists 
 of around 620 MyISAM tables with the following structure:
 
 CREATE TABLE `ifInErrors_2` (
   `id` int(11) NOT NULL default '0',
    `dtime` datetime NOT NULL default '0000-00-00 00:00:00',
   `counter` bigint(20) NOT NULL default '0',
   KEY `ifInErrors_2_idx` (`dtime`)
 ) TYPE=MyISAM;
 
 The tables range in size from 1-2 rows upwards to over 
 9,200,000 with an average of around 570,000 rows.  Updates to 
 this database run constantly (it records information from a 
 program that monitors the routers and switches on my 
 company's WAN) at a rate of around 2600 inserts/minute.
 
 I had originally planned to use syntax similar to:
 
 DELETE FROM table_name WHERE `dtime` < [90 days ago]
 
 After the DELETE runs, I plan on running MYISAMCHK on the 
 affected table. Then I'll repeat both steps for all of the 
 other tables in turn.
 
 Does anyone have any suggestions for alternatives or is there 
 anything I'm missing here?  I'm new to this stuff and may be 
 way off base here.  If so, please tell me.
 
 One last thing.  I'm running out of drive space and am I/O 
 bound - I'm writing this script in case I can't get the RAID 
 array I'm hoping for. Because of the I/O problem, execution 
 time can sometimes be a factor in what I do.  Execution speed 
 is a primary concern.  If this takes a day or two to run but 
 the application can be running at the same time then there's 
 no problem.  But if I have to shut down my application for 
 any appreciable length I time, then I have to find another 
 way of doing this.
 
 Thanks,
 
 Jack
 
 Jack Coxen
 IP Network Engineer
 TelCove
 712 North Main Street
 Coudersport, PA 16915
 814-260-2705
 
 


Delete questions and speed/safety issues

2003-08-20 Thread Jack Coxen
I'm in the process of writing a Perl script to delete records out of a
database when the records are more than 90 days old.  I'm running MySQL
version 4.0.12-standard.  The db consists of around 620 MyISAM tables with
the following structure:

CREATE TABLE `ifInErrors_2` (
  `id` int(11) NOT NULL default '0',
  `dtime` datetime NOT NULL default '0000-00-00 00:00:00',
  `counter` bigint(20) NOT NULL default '0',
  KEY `ifInErrors_2_idx` (`dtime`)
) TYPE=MyISAM;

The tables range in size from 1-2 rows upwards to over 9,200,000 with an
average of around 570,000 rows.  Updates to this database run constantly (it
records information from a program that monitors the routers and switches on
my company's WAN) at a rate of around 2600 inserts/minute.

I had originally planned to use syntax similar to:

DELETE FROM table_name WHERE `dtime` < [90 days ago]

After the DELETE runs, I plan on running MYISAMCHK on the affected table.
Then I'll repeat both steps for all of the other tables in turn.
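[A minimal sketch of that per-table age-out pass, with Python's built-in sqlite3 standing in for MySQL. The table names and 90-day cutoff follow the message; the rows, the nominal "today", and the use of VACUUM (sqlite's closest analogue to the MyISAM-specific myisamchk/OPTIMIZE step) are illustrative assumptions.]

```python
import sqlite3
from datetime import datetime, timedelta

con = sqlite3.connect(":memory:")
tables = ["ifInErrors_2", "ifInErrors_3"]
for t in tables:
    con.execute(f"CREATE TABLE {t} (id INTEGER, dtime TEXT, counter INTEGER)")
    con.execute(f"INSERT INTO {t} VALUES (1, '2003-01-01 00:00:00', 0)")  # old row
    con.execute(f"INSERT INTO {t} VALUES (1, '2003-08-19 00:00:00', 0)")  # recent row

# Cutoff 90 days before a nominal "today" (the message is dated 2003-08-20).
cutoff = (datetime(2003, 8, 20) - timedelta(days=90)).strftime("%Y-%m-%d %H:%M:%S")
deleted = {}
for t in tables:
    cur = con.execute(f"DELETE FROM {t} WHERE dtime < ?", (cutoff,))
    deleted[t] = cur.rowcount
con.commit()

# myisamchk/OPTIMIZE TABLE has no sqlite equivalent; VACUUM is the closest
# analogue for reclaiming file space after large deletes.
con.execute("VACUUM")
print(deleted)
```

Each table loses exactly its one pre-cutoff row here; the recent row survives.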

Does anyone have any suggestions for alternatives or is there anything I'm
missing here?  I'm new to this stuff and may be way off base here.  If so,
please tell me.

One last thing.  I'm running out of drive space and am I/O bound - I'm
writing this script in case I can't get the RAID array I'm hoping for.
Because of the I/O problem, execution time can sometimes be a factor in what
I do.  Execution speed is a primary concern.  If this takes a day or two to
run but the application can be running at the same time then there's no
problem.  But if I have to shut down my application for any appreciable
length of time, then I have to find another way of doing this.

Thanks,

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705



RE: Really stupid Question...

2003-08-01 Thread Jack Coxen
A shell prompt is the Unix/Linux equivalent of the DOS prompt.  Since you
have the choice of several different operating system shells to work in
(Bourne shell - sh, Korn shell - ksh, C shell - csh, Bourne Again shell -
bash (my favorite), etc.), the command line prompts are generically referred
to as shell prompts.

Jack

-Original Message-
From: Ola Ogunneye [mailto:[EMAIL PROTECTED]
Sent: Friday, August 01, 2003 8:28 AM
To: [EMAIL PROTECTED]
Subject: Really stupid Question...


I am a Windows User and very used to the Dos Prompt. Can someone please
tell me what the Shell Prompt is?

What I am asking is the difference between a shell and a dos prompt
vis-a-vis mysql. 

Thank you.

Ola


Performance questions

2003-08-01 Thread Jack Coxen
I've gone through the mailing list archives, read the appropriate parts of
the manual and searched the internet and still have questions.  What I'm
running into is a serious performance problem with a fairly large (by my
standards) database.  My system setup is as follows:

Dual processor Sun E250 w/1.5GB RAM running Solaris 8 (with all current
patches)
18GB dedicated drive/17GB data partition for the database

I'm running MySQL 4.0.12 standard with the Large my.cnf in place.  I'm not
running the Huge my.cnf because I'm running other applications on the
server.

Other software running on the server consists of:
Apache
120 iterations of MRTG - one for each of the devices I have to monitor.  The
MRTG RRD databases are on a separate drive from MySQL and MRTG doesn't
contribute seriously to the system load.
Various shell scripts and Perl scripts triggered by cron to automate damn
near everything. :)

My db application is a program called RTG - it's a replacement for MRTG that
uses MySQL on the backend - http://rtg.sourceforge.net.  BTW, both of these
applications (if you're not familiar with them) are designed to monitor
traffic through routers and switches.  MRTG uses a program called RRD (Round
Robin Database) on the backend that does data averaging for any data over 24
hour old.  RTG scales better for large networks and doesn't do data
averaging.  It keeps live data to the limit of your drive space.

Since starting up RTG not quite 3 months ago, my database has grown to over
10GB in size.  If I can get the drive space to keep the 2 years worth of
data I want to, my database will be edging upward toward 100GB.  There are
currently 682 tables ranging in size from 2 KB/1 record to around 310
MB/7570511 records depending on the number of ports on a router and how much
traffic flows through it.  

I'm running around 2500 queries/minute - 99+% of them inserts.  My cpu load
is minimal - around .40 nominally but it may go as high as 1.80 or so when
handling multiple large retrievals.  If I look at the iostats output for my
server, the drive controller bandwidth utilization is around 30-40% during
normal operation but immediately jumps to 100% utilization during retrieves.
Retrieval is done via Perl scripts or from Perl/PHP WebPages.

I'm trying to get a RAID array loaded with striped drives to hang off the
server but until I can talk someone into signing the check, I need to do
anything I can to improve performance.

Does anyone have any suggestions?  Preferably ones that don't involve money?
If it would help, I _might_ be able to coerce the IT group into giving me
more RAM but an E250 will only hold 2GB RAM and I'm already at 1.5GB.  I've
modified the RTG table structure to use different indexes and modified the
Perl/PHP scripts to suit those changes and gotten significant performance
boosts from that.  What I really need are system tweaks and configuration
changes to improve performance.  
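[One money-free lever for an insert-dominated load like this is batching many rows per statement instead of one INSERT per row; in MySQL that means multi-row INSERT ... VALUES (...), (...) or LOAD DATA INFILE. A sketch of the effect, with Python's built-in sqlite3 standing in for MySQL; the table, row counts, and timings are synthetic.]

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (id INTEGER, dtime TEXT, counter INTEGER)")
rows = [(i, "2003-08-01 00:00:00", i * 10) for i in range(2500)]

# One execute() per row -- roughly the shape of 2500 separate INSERTs/minute.
t0 = time.perf_counter()
for r in rows:
    con.execute("INSERT INTO samples VALUES (?, ?, ?)", r)
row_at_a_time = time.perf_counter() - t0

# The same number of rows handed over in a single batched call.
t0 = time.perf_counter()
con.executemany("INSERT INTO samples VALUES (?, ?, ?)", rows)
batched = time.perf_counter() - t0
con.commit()

total = con.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(f"row-at-a-time: {row_at_a_time:.4f}s  batched: {batched:.4f}s  rows: {total}")
```

The batched call typically wins because it amortizes statement parsing and per-call overhead across the whole batch, which maps onto fewer round trips and less per-statement work on a MySQL server as well.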

Jack

Jack Coxen
IP Network Engineer
TelCove
712 North Main Street
Coudersport, PA 16915
814-260-2705