Re: MySQL Secure Connection(e.g. SSL) Question

2004-08-25 Thread Issac Goldstand
This all strikes me as somewhat odd, since there is a --ssl flag which one
can use to build DBD::mysql with SSL support.  Unfortunately, I've never
personally figured out how to actually utilize it, but maybe others are more
knowledgeable...

  Issac
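For reference, the server-side half of the setup is plain SQL; a minimal
sketch, assuming a MySQL 4.x server built with OpenSSL support (the database
and account names below are only placeholders):

  -- Check whether this server build has SSL support compiled in
  -- (reports YES or NO on 4.x servers).
  SHOW VARIABLES LIKE 'have_openssl';

  -- Require SSL for a particular account; X509/ISSUER/SUBJECT clauses
  -- can be added for stricter certificate checks.
  GRANT ALL PRIVILEGES ON mydb.* TO 'remote_user'@'%'
      IDENTIFIED BY 'secret'
      REQUIRE SSL;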

- Original Message - 
From: "Joshua J. Kugler" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, August 25, 2004 8:25 PM
Subject: Re: MySQL Secure Connection(e.g. SSL) Question


> Something else to check out is Stunnel.  It creates SSL tunnels between
> hosts without requiring logins (basically port redirection).  Also, MySQL
> has built-in SSL now, so you might want to look at that.
>
> j- k-
>
> On Wednesday 25 August 2004 08:10 am, [EMAIL PROTECTED] said something
> like:
> > Something else I have done in the past was to use Cygwin to create an
> > SSH session with the remote computer and use the remote computer's MySQL
> > client/tools.
> >
> > If you are used to working in a "Terminal Server" session (I think they
> > now call it "Remote Desktop Connectivity") it will feel very familiar.
> > I also fiddled around with Cygwin long enough and hard enough (translate:
> > I spent lots of time reading groups and docs and experimenting) to enable
> > me to create local X windows so that I could run a full GUI shell (GNOME
> > or KDE, I can't remember) remotely. Please don't ask me how I got it all
> > set up as I only did it once and it was a long while ago. I know it's
> > possible because I did it and it worked well for my situation.
> >
> > FWIW,
> > Shawn Green
> > Database Administrator
> > Unimin Corporation - Spruce Pine
> >
> > "Paul Maine" <[EMAIL PROTECTED]> wrote on 08/24/2004 12:03:19 PM:
> > > I need to connect to a remote MySQL database from a PC using SSL. I
> >
> > would
> >
> > > prefer to connect using perl DBD. Does anyone have a suggestion how I
> >
> > can
> >
> > > accomplish this task or an alternative solution?
> > >
> > > Thank You
> > >
> > >
> > > --
> > > MySQL General Mailing List
> > > For list archives: http://lists.mysql.com/mysql
> > > To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
>
> -- 
> Joshua J. Kugler -- Fairbanks, Alaska -- ICQ#:13706295
> Every knee shall bow, and every tongue confess, in heaven, on earth, and
under
> the earth, that Jesus Christ is LORD -- Count on it!
>
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mySQL beginner question

2004-08-25 Thread B Wiley Snyder
Hello, I hope this is the right list.
I created a table with the following code:
CREATE TABLE jspCart_products (
ProductID int primary key,
CategoryID int,
ModelNumber varChar(75),
ModelName varChar(250),
ProductImage varchar(250),
UnitCost decimal(9,2),
Description BLOB
);
The CategoryID has duplicate entries. When I use SELECT * FROM  
I see the categories just fine, but when they are sent to an HTML page they 
come through as null values. Would that be a screw-up in my code, or is it because I 
need to specify when I initially build the table, like above, that it allows 
duplicates? I hope that makes sense, and thanks for replies in advance.
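For what it's worth, duplicate values in a column that is not a PRIMARY KEY
or UNIQUE need no special declaration at table-creation time, so the null
values are more likely coming from the page code. A non-unique index on the
column is optional and only speeds up lookups; a small sketch using the table
above (the index name and category value are just illustrative):

  -- Duplicates in CategoryID are already allowed; this index only helps queries.
  ALTER TABLE jspCart_products ADD INDEX idx_category (CategoryID);

  SELECT * FROM jspCart_products WHERE CategoryID = 3;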

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: mysql-plain Digest 25 Aug 2004 19:36:13 -0000 Issue 3063

2004-08-25 Thread Michael Stassen
[EMAIL PROTECTED] wrote:
The anonymous account IS a security risk and should be deleted as soon as 
you are capable of managing "real" logins. It's basically there to let 
newbie administrators get up to speed with as few problems as possible. 
Now that you are comfortable with the MySQL privileges, dump it and hope 
it never comes back... ;-)

http://dev.mysql.com/doc/mysql/en/Default_privileges.html
I agree wholeheartedly, but it should be noted that some tools which work 
with mysql have tests during installation which, at least by default, try to 
connect as the anonymous user to use the test db.  The test script for 
Perl's DBD::mysql, for example, does this, which makes installing it through 
CPAN a pain.  So, the one drawback to deleting the anonymous user is that 
doing so adds a wrinkle to some 3rd party tool installations.  Every one of 
these that I've come across, though, has a way to supply a real user and 
password to use in place of the anonymous user, so you shouldn't let that 
stop you from deleting it.
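A minimal sketch of actually removing the anonymous accounts (run by a user
with access to the mysql database, typically root):

  DELETE FROM mysql.user WHERE User = '';
  DELETE FROM mysql.db   WHERE User = '';
  FLUSH PRIVILEGES;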

Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Can append word into varchar column in Update statement?

2004-08-25 Thread Michael Stassen
Monet wants to append something to the current value, so I think CONCAT() is 
needed:

  UPDATE temp
  SET ReviewComments = CONCAT(ReviewComments, ', WHC')
  WHERE ... ;
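One caveat worth noting: CONCAT() returns NULL if any of its arguments is
NULL, so rows where ReviewComments is NULL would end up NULL after the
update. Wrapping the column in IFNULL() avoids that; a sketch, keeping the
placeholder WHERE clause from the example above:

  UPDATE temp
  SET ReviewComments = CONCAT(IFNULL(ReviewComments, ''), ', WHC')
  WHERE ... ;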
Michael
[EMAIL PROTECTED] wrote:
Yes, UPDATE ... SET... is exactly what you would use. Just set the column 
to be the combined value of the two values.
Please read: http://dev.mysql.com/doc/mysql/en/UPDATE.html

UPDATE tablename
SET fieldname = 'new list of values that you wanted'
WHERE _where_conditions_
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
Monet <[EMAIL PROTECTED]> wrote on 08/24/2004 07:49:37 PM:

Hi all,
I though it is impossible to do that but I like check
with you guys in case it is just because I never heard
it.
I have a table with a text column. Is there any way I
can append some word into this field when I update the
table?
For instance,
Table temp, column ReviewComments, data type of
ReviewComments is varchar.
Value in "ReviewComments" is "PSRC".
After update, value in "ReviewComments" should be
"PSRC, WHC".

Can I do that by using UPDATE ... SET ...?

Thanks a lot.
Monet

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Two indexes on same column. Crazy ?

2004-08-25 Thread Dan Nelson
In the last episode (Aug 26), Alistair Gusmao said:
> I inherited a database with all MyIsam tables. In one of the tables,
> there was two indexes on the same column. Everything about the two
> indexes is the same (except the index name of course!). I think this
> is crazy ? What is your opinion ? Should I remove one of them ?

If they're identical, then yes, you're just wasting space and CPU.
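To act on that, SHOW INDEX lists the duplicates and ALTER TABLE drops one by
name; a sketch with placeholder table and index names:

  SHOW INDEX FROM mytable;
  ALTER TABLE mytable DROP INDEX duplicate_index_name;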


-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Two indexes on same column. Crazy ?

2004-08-25 Thread Alistair Gusmao
I inherited a database with all MyIsam tables.
In one of the tables, there was two indexes on the same column.
Everything about the two indexes is the same (except the index name of course!).
I think this is crazy ? What is your opinion ? Should I remove one of them ?

Thanks

Alistair


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: need help with a query

2004-08-25 Thread Redmond Militante
awesome.  thank you!


[Wed, Aug 25, 2004 at 06:27:38PM -0700]
This one time, at band camp, David Perron said:

> 
> Are you assuming that all months have 30 days?  You can use the same syntax
> with INTERVAL 1 MONTH
> I would also format the date comparison to use the same precision that the
> DATE_ADD function outputs.
> 
> So,
> 
> DATE_ADD(table.date, INTERVAL 1 MONTH) <= DATE_FORMAT(CURDATE(), '%Y-%m-%d')
> 
> I might be off on the 1 MONTH part, but I think it's the right track.
> 
> -Original Message-
> From: Redmond Militante [mailto:[EMAIL PROTECTED] 
> Sent: Wednesday, August 25, 2004 6:03 PM
> To: [EMAIL PROTECTED]
> Subject: need help with a query
> 
> hi
> 
> i need advice on a query i'm trying to do.  i'm trying to find entries
> that are about to expire.  entries expire if their date of submission is
> older than 60 days.  i want to find all entries with a date of submission
> older than 30 days, i.e. those that are going to expire within a month 
> 
> i'm using this as part of my query to find entries that are about to expire:
> ' ... DATE_ADD($dbtable3.savedemployers2, INTERVAL 30 DAY) <=NOW()' 
> 
> am i doing this the right way?
> 
> thanks
> redmond
> 
> --
> Redmond Militante
> Software Engineer / Medill School of Journalism FreeBSD 5.2.1-RELEASE-p9 #0:
> Thu Jul 1 14:36:26 CDT 2004 i386  8:00PM  up 28 days,  3:36, 4 users, load
> averages: 0.00, 0.29, 0.51
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 

-- 
Redmond Militante
Software Engineer / Medill School of Journalism
FreeBSD 5.2.1-RELEASE-p9 #0: Thu Jul 1 14:36:26 CDT 2004 i386
11:00PM  up 28 days,  6:36, 3 users, load averages: 0.22, 0.18, 0.30


pgpC8omL1KwJU.pgp
Description: PGP signature


Re: need help with a query

2004-08-25 Thread Michael Stassen
David Perron wrote:
Are you assuming that all months have 30 days?  You can use the same syntax
with INTERVAL 1 MONTH
True.
I would also format the date comparison to use the same precision that the
DATE_ADD function outputs.
Why would you do that?  The date column contains a DATE.  CURDATE() returns 
a DATE.  DATE_ADD() returns a DATE.  Comparing DATEs is straightforward. 
DATE_FORMAT returns a string, however, so your query compares a DATE to a 
string.  At best, the difference is optimized away.  At worst, an extra 
conversion takes place.  Formatting is for presentation, not comparison.

So,
DATE_ADD(table.date, INTERVAL 1 MONTH) <= DATE_FORMAT(CURDATE(), '%Y-%m-%d')
I might be off on the 1 MONTH part, but I think it's the right track.
Again, compare columns to (functions of) constants.  Do not run the column 
through a function, if at all possible.  Thus, this should be

  table.date <= CURDATE() - INTERVAL 1 MONTH
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: need help with a query

2004-08-25 Thread Michael Stassen
Redmond Militante wrote:
hi
i need advice on a query i'm trying to do.  i'm trying to find
entries that are about to expire.  entries expire if their date
of submission is older than 60 days.  i want to find all entries with
a date of submission older than 30 days, i.e. those that are going to
expire within a month
i'm using this as part of my query to find entries that are about to
expire: ' ... DATE_ADD($dbtable3.savedemployers2, INTERVAL 30 DAY)
<=NOW()' 

am i doing this the right way?
thanks
redmond
Almost.  First, you say that your column is a DATE, but you are comparing to 
NOW(), which returns a DATETIME.  That's OK (mysql will convert), but it 
would be slightly better to compare to CURDATE(), as it returns a DATE.

More importantly, you do not want to compare a function of a column to a 
constant, because then an index on the column is of no use, forcing a full 
table scan.  If at all possible, move the function to the constant side of 
the comparison, as a function of a constant is a constant.

In other words,
  datecol + 30 days <= today
is equivalent to
  datecol <= today - 30 days
but the former forces a table scan while the latter could use an index on 
datecol.

So, in your case, you should use
  WHERE ... $dbtable3.savedemployers2 <= CURDATE() - INTERVAL 30 DAY
Finally, note that this would also return entries which have already 
expired, if they are still there.  To leave those out, change the condition to

  $dbtable3.savedemployers2 BETWEEN CURDATE() - INTERVAL 60 DAY
AND CURDATE() - INTERVAL 30 DAY
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Prepared Statement questions/issues

2004-08-25 Thread Leonardus Setiabudi
Hi Ken..

this might not solve your problem..
but, did you really have to do this

create table raddata (
 granID    integer not null,
 scanID    tinyint unsigned not null,
 fpID      tinyint not null,
 c1        smallint,
 c2        smallint,
 c3        smallint,
  [and so on and so forth]
 c2376     smallint,
 c2377     smallint,
 c2378     smallint,
 primary key (fpID, scanID, granID)
);

just looking at your table definition, i can imagine how long it will
take to insert data..
couldn't you break it down into two tables??
like 

create table raddata (
 granID    integer not null,
 scanID    tinyint unsigned not null,
 fpID      tinyint not null,
 primary key (fpID, scanID, granID)
);

create table radcolumns (
 granID    integer not null,
 scanID    tinyint unsigned not null,
 fpID      tinyint not null,
 cID       smallint,
 cValue    smallint,
 key (fpID, scanID, granID)
);

maybe this can help


On Thu, 26 Aug 2004 08:11:53 +1000, Daniel Kasak
<[EMAIL PROTECTED]> wrote:
> Ken Gieselman wrote:
> 
> >Hi Folks --
> >
> >I'm running into some issues with large prepared statements, and would love some
> >feedback about approaches or tweaks that I might have overlooked :)
> >
> >
> >
> >
> 
> 
> >Any suggestions as to how to do this more efficiently/faster?  Or how to
> >increase the buffer space for the prepared statements, so as to do the inserts
> >in larger groups?
> >
> >
> I can't comment on prepared statements - I've never used them. I'm
> running a 4.0.x server here.
> But I do know that one of the fastest ways to get data into MySQL is
> with 'load data infile'.
> Maybe you export your data to a csv file and import it that way? Line up
> each 1000 inserts in a text file and then run 'load data infile' or
> something like that?
> 
> It's not the cleanest solution, granted, but if you need it to work
> better in a hurry, this might be your ticket ( at least until you figure
> out what's up with your prepared statements ).
> 
> --
> Daniel Kasak
> IT Developer
> NUS Consulting Group
> Level 5, 77 Pacific Highway
> North Sydney, NSW, Australia 2060
> T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
> email: [EMAIL PROTECTED]
> website: http://www.nusconsulting.com.au
> 
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: crosstabs and pivot tables

2004-08-25 Thread Leonardus Setiabudi
hi there

this might be an extreme solution.. but i thought you could come up
with just one query instead of 20 temp tables.. if you don't mind
writing it :)

select id,
max(if(datetype='entered_date',date,'')) entered_date,
max(if(datetype='modified_date',date,'')) modified_date,
max(if(datetype='application_date',date,'')) application_date,
.
.
.
from yourtable
group by id
order by id

HTH
Leo




On Wed, 25 Aug 2004 16:16:05 -0700, Donna Hinshaw
<[EMAIL PROTECTED]> wrote:
> Shawn -
>
> Maybe terminology here...but I think of a "crosstab" query as one which
> yields sums or averages or
> some such tabulation.
>
> What I need to do is just "pivot" from rows to columns.
>
> The way I've come up with is this:
>
> ORIGINAL TABLE:
> columns  =  id   date   datetype
> there are about 20 datetypes (eg. entered_date, modified_date,
> application_date)
> there can be up to 20 rows for each ID value
>
> DESIRED TABLE:
> columns =   id   entered_date   modified_date   application_date 
> so there is just one row for each ID value
>
> SOLUTION:
> create temporary tables for each of the date types
> columns for TEMP1 = id  entered_date
> columns for TEMP2 = id modified_date
> columns for TEMP3 = id application_date
> . so on for all date types
>
> then, do an insert into the DESIRED TABLE for all columns,
> as select (  ... join all 20 tables on the ID value )
>
> Since there are over 23000 ID values, and over 20 date types,
> this join on all 20 TEMP tables will be an overnight batch process,
> which is fine for the users - this DESIRED TABLE is for reporting
> somewhat after the fact, and one day "out-of-synch" is fine.
>
> Any other suggestions?
>
> Donna
>
> [EMAIL PROTECTED] wrote:
>
> >I still see what you want as a crosstab query. The only difference, as you
> >say very well, is that you want to pivot on the "date type" values and not
> >the ID values.
> >
> >The only other thing you need to decide in order to make a crosstab report
> >is "what information goes in the position for the row ID=x  and column
> >datetype=y". Do you want to see the SUM of some value for each of the rows
> >with that ID value and Date type? Or the average or maximum or minimum or
> >the standard deviation for the set.  I am asking you how do you want to
> >calculate each of the values of x, y, z, and w as you listed them in your
> >example output?
> >
> >It's the same pattern as the other crosstab queries (aka pivot tables) but
> >you have to tell me which column you want to calculate values from and
> >which calculation to use before I can give you an example using your data.
> >
> >Shawn Green
> >Database Administrator
> >Unimin Corporation - Spruce Pine
> >
> >Donna Hinshaw <[EMAIL PROTECTED]> wrote on 08/20/2004 03:13:16
> >PM:
> >
> >
> >
> >>Hi - I've read the threads about converting rows of data into columns,
> >>but those threads assume the number of distinct rows is very limited
> >>(say 7 for days of week).
> >>
> >>Instead, I have a table like this:
> >>
> >>id    date   date type
> >>1 ...  a
> >>1 ...  b
> >>
> >>2   a
> >>2    d
> >>
> >>Where the number of distinct id values is in the 100,000 range
> >>but the distinct date types are limited to about 20.
> >>
> >>I want to get a table (not a view) like this:
> >>
> >>id    date a   date b   date d   ..
> >>1  x   y null
> >>2  z  null   w
> >> (based on the values in the first table above)
> >>
> >>So...I can't think how to do this.  Help would be appreciated.
> >>
> >>TIA.
> >>dmh
> >>
> >>
> >>
> >>--
> >>MySQL General Mailing List
> >>For list archives: http://lists.mysql.com/mysql
> >>To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> >>
> >>
> >>
> >
> >
> >
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: need help with a query

2004-08-25 Thread David Perron

Are you assuming that all months have 30 days?  You can use the same syntax
with INTERVAL 1 MONTH
I would also format the date comparison to use the same precision that the
DATE_ADD function outputs.

So,

DATE_ADD(table.date, INTERVAL 1 MONTH) <= DATE_FORMAT(CURDATE(), '%Y-%m-%d')

I might be off on the 1 MONTH part, but I think it's the right track.

-Original Message-
From: Redmond Militante [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 25, 2004 6:03 PM
To: [EMAIL PROTECTED]
Subject: need help with a query

hi

i need advice on a query i'm trying to do.  i'm trying to find entries
that are about to expire.  entries expire if their date of submission is
older than 60 days.  i want to find all entries with a date of submission
older than 30 days, i.e. those that are going to expire within a month 

i'm using this as part of my query to find entries that are about to expire:
' ... DATE_ADD($dbtable3.savedemployers2, INTERVAL 30 DAY) <=NOW()' 

am i doing this the right way?

thanks
redmond

--
Redmond Militante
Software Engineer / Medill School of Journalism FreeBSD 5.2.1-RELEASE-p9 #0:
Thu Jul 1 14:36:26 CDT 2004 i386  8:00PM  up 28 days,  3:36, 4 users, load
averages: 0.00, 0.29, 0.51


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



need help with a query

2004-08-25 Thread Redmond Militante
hi

i need advice on a query i'm trying to do.  i'm trying to find entries that are 
about to expire.  entries expire if their date of submission is older than 60 days.  i 
want to find all entries with a date of submission older than 30 days, i.e. those that 
are going to expire within a month 

i'm using this as part of my query to find entries that are about to expire: ' ... 
DATE_ADD($dbtable3.savedemployers2, INTERVAL 30 DAY) <=NOW()' 

am i doing this the right way?

thanks
redmond

-- 
Redmond Militante
Software Engineer / Medill School of Journalism
FreeBSD 5.2.1-RELEASE-p9 #0: Thu Jul 1 14:36:26 CDT 2004 i386
 8:00PM  up 28 days,  3:36, 4 users, load averages: 0.00, 0.29, 0.51


pgpAimRBp7GPQ.pgp
Description: PGP signature


Re: Problems with Mysql database and PHPAdmin

2004-08-25 Thread Rusty Wright
Creating a database is a separate thing from adding permissions for a
database to the grant tables.  When you create a database it's a "free
floating" database that nobody (except root) has permission to access
until you issue the grant statements to make someone "own" it.  But
nobody actually owns the database, they just have permission to access
and use it.

Conversely, you could issue grant statements that allow someone to
access and use a database that doesn't exist.  You could then later at
your leisure create the database.

Nothing's automatically added to the grant tables when you create a
database, and no database is automatically created when you add
entries for a nonexistent one to the grant tables.
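A minimal sketch of the two independent steps described above (the database,
user, and password names are placeholders):

  -- Step 1: create the database; only root can use it so far.
  CREATE DATABASE mydb;

  -- Step 2: separately, add grant-table entries so someone may use it.
  -- This statement would work just as well before the database exists.
  GRANT ALL PRIVILEGES ON mydb.* TO 'someuser'@'localhost' IDENTIFIED BY 'secret';
  FLUSH PRIVILEGES;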

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Question about SQL statements......

2004-08-25 Thread David Souza
You rock!!  Worked perfectly
On Aug 25, 2004, at 4:31 PM, Rhino wrote:
- Original Message -
From: "David Souza" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, August 25, 2004 7:18 PM
Subject: Question about SQL statements..

So I have all of this year's Apache access logs in a huge DB table,  
most
of what I wanted to get out is working fine, but I am trying to get  
top
10 requests (hits) by day inside a certain section of the hierarchy,
but I can't seem to get the correct results.

I have been trying many different ways of specifying the SQL  
statement,
so here is what I have so far:

I have hits to a specific page, by week working fine here:
mysql> SELECT week(time_stamp, 1) as week, request_uri, count(*) as  
hits
 -> FROM `access_log`
 -> WHERE (request_uri="/testing/folder/foldersetup.exe") AND
(time_stamp > 2004040100 AND time_stamp < 2004082300)
 -> GROUP BY week
 -> ORDER BY week ASC, hits DESC
 -> LIMIT 20;
+--+-+---+
| week | request_uri | hits  |
+--+-+---+
|   27 | /testing/folder/folderSetup.exe | 20069 |
|   28 | /testing/folder/folderSetup.exe |  3785 |
|   29 | /testing/folder/folderSetup.exe |  1607 |
|   30 | /testing/folder/folderSetup.exe |   878 |
|   31 | /testing/folder/folderSetup.exe |   858 |
|   32 | /testing/folder/folderSetup.exe |   779 |
|   33 | /testing/folder/folderSetup.exe |   721 |
|   34 | /testing/folder/folderSetup.exe |  1015 |
+--+-+---+
8 rows in set (0.85 sec)

but when I try to get top hits with a like statement by day (only 2
days) I get:
mysql> SELECT day(time_stamp) as day, request_uri, count(DISTINCT
request_uri) as hits
 -> FROM `access_log`
 -> WHERE (time_stamp > 2004050300 AND time_stamp <
2004050500) AND (request_uri LIKE "/docs/Carb/%")
 -> GROUP BY day
 -> ORDER BY day DESC, hits DESC
 -> LIMIT 20;
+--+---+--+
| day  | request_uri   | hits |
+--+---+--+
|4 | /docs/Carb/index.html | 7075 |
|3 | /docs/Carb/Conceptual/index.html  | 6979 |
+--+---+--+
2 rows in set (39.68 sec)
which seems to give me all the hits within /docs/Carb/ on the two days
and then displays the last request_uri that came through. Maybe I
cannot do this with one SQL statement or I just need more logic.
Ideally what I would want would be something like:
+-----+-----------------------------------------------------------------------+------+
| day | request_uri                                                           | hits |
+-----+-----------------------------------------------------------------------+------+
|   4 | /docs/Carb/index.html                                                 | 1063 |
|   4 | /docs/Carb/Conceptual/test_system/new_test_system.html               |  930 |
|   4 | /docs/Carb/Conceptual/test_system/toc.html                           |  921 |
|   4 | /docs/Carb/Conceptual/test_system/                                   |  799 |
|   4 | /docs/Carb/Conceptual/Testing_book/toc.html                          |  483 |
|   4 | /docs/Carb/Conceptual/Testing_book/Intro/chapter_1_section_1.html    |  479 |
|   4 | /docs/Carb/Conceptual/Testing_book/index.html                        |  433 |
|   4 | /docs/Carb/Conceptual/newQuickTour/qt_intro/chapter_1_section_1.html |  425 |
|   4 | /docs/Carb/date.html                                                 |  421 |
|   4 | /docs/Carb/Conceptual/newQuickTour/toc.html                          |  375 |
|   3 | /docs/Carb/index.html                                                 | 1063 |
|   3 | /docs/Carb/Conceptual/test_system/new_test_system.html               |  930 |
|   3 | /docs/Carb/Conceptual/test_system/toc.html                           |  921 |
|   3 | /docs/Carb/Conceptual/test_system/                                   |  799 |
|   3 | /docs/Carb/Conceptual/Testing_book/toc.html                          |  483 |
|   3 | /docs/Carb/Conceptual/Testing_book/Intro/chapter_1_section_1.html    |  479 |
|   3 | /docs/Carb/Conceptual/Testing_book/index.html                        |  433 |
|   3 | /docs/Carb/Conceptual/newQuickTour/chapter_1_section_1.html          |  425 |
|   3 | /docs/Carb/date.html                                                 |  421 |
|   3 | /docs/Carb/Conceptual/newQuickTour/toc.html                          |  375 |
+-----+-----------------------------------------------------------------------+------+

Any thoughts/help would be much appreciated!!!
Change your GROUP BY to this:
GROUP BY day, request_uri
The rest of the query should be fine the way it is.
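Put together, and assuming the per-page hit count is what is wanted (COUNT(*)
rather than COUNT(DISTINCT request_uri), which would always be 1 once the
grouping includes request_uri), the query would look something like:

mysql> SELECT day(time_stamp) as day, request_uri, count(*) as hits
    -> FROM `access_log`
    -> WHERE (time_stamp > 2004050300 AND time_stamp < 2004050500)
    ->   AND (request_uri LIKE "/docs/Carb/%")
    -> GROUP BY day, request_uri
    -> ORDER BY day DESC, hits DESC
    -> LIMIT 20;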
Rhino

**
David Souza voice:408-974-1992
Web Specialist http://developer.apple.com/
Worldwide Developer Relations  mailto:[EMAIL PROTECTED]
**


FLUSH TABLES WITH READ LOCK

2004-08-25 Thread Kato Haws
In the " 4.0.20-standard-log" version of mysql, will the " FLUSH TABLES WITH READ 
LOCK" statement work to quiesce a database consisting of Innodb tables?

Thanks,

---
Kato D. Haws
Arizona State University;   Database Administration
---
El respeto al derecho ajeno es la paz. - Benito Juárez.
(Respect for the rights of others is peace).
---



Re: Question about SQL statements......

2004-08-25 Thread Rhino

- Original Message - 
From: "David Souza" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, August 25, 2004 7:18 PM
Subject: Question about SQL statements..


> So I have all of this year's Apache access logs in a huge DB table, most  
> of what I wanted to get out is working fine, but I am trying to get top  
> 10 requests (hits) by day inside a certain section of the hierarchy,  
> but I can't seem to get the correct results.
> 
> I have been trying many different ways of specifying the SQL statement,  
> so here is what I have so far:
> 
> I have hits to a specific page, by week working fine here:
> 
> mysql> SELECT week(time_stamp, 1) as week, request_uri, count(*) as hits
>  -> FROM `access_log`
>  -> WHERE (request_uri="/testing/folder/foldersetup.exe") AND  
> (time_stamp > 2004040100 AND time_stamp < 2004082300)
>  -> GROUP BY week
>  -> ORDER BY week ASC, hits DESC
>  -> LIMIT 20;
> +--+-+---+
> | week | request_uri | hits  |
> +--+-+---+
> |   27 | /testing/folder/folderSetup.exe | 20069 |
> |   28 | /testing/folder/folderSetup.exe |  3785 |
> |   29 | /testing/folder/folderSetup.exe |  1607 |
> |   30 | /testing/folder/folderSetup.exe |   878 |
> |   31 | /testing/folder/folderSetup.exe |   858 |
> |   32 | /testing/folder/folderSetup.exe |   779 |
> |   33 | /testing/folder/folderSetup.exe |   721 |
> |   34 | /testing/folder/folderSetup.exe |  1015 |
> +--+-+---+
> 8 rows in set (0.85 sec)
> 
> but when I try to get top hits with a like statement by day (only 2  
> days) I get:
> 
> mysql> SELECT day(time_stamp) as day, request_uri, count(DISTINCT  
> request_uri) as hits
>  -> FROM `access_log`
>  -> WHERE (time_stamp > 2004050300 AND time_stamp <  
> 2004050500) AND (request_uri LIKE "/docs/Carb/%")
>  -> GROUP BY day
>  -> ORDER BY day DESC, hits DESC
>  -> LIMIT 20;
> +--+---+--+
> | day  | request_uri   | hits |
> +--+---+--+
> |4 | /docs/Carb/index.html | 7075 |
> |3 | /docs/Carb/Conceptual/index.html  | 6979 |
> +--+---+--+
> 2 rows in set (39.68 sec)
> 
> which seems to give me all the hits within /docs/Carb/ on the two days  
> and then displays the last request_uri that came through. Maybe I  
> cannot do this with one SQL statement or I just need more logic.  
> Ideally what I would want would be something like:
> 
> +-----+-----------------------------------------------------------------------+------+
> | day | request_uri                                                           | hits |
> +-----+-----------------------------------------------------------------------+------+
> |   4 | /docs/Carb/index.html                                                 | 1063 |
> |   4 | /docs/Carb/Conceptual/test_system/new_test_system.html               |  930 |
> |   4 | /docs/Carb/Conceptual/test_system/toc.html                           |  921 |
> |   4 | /docs/Carb/Conceptual/test_system/                                   |  799 |
> |   4 | /docs/Carb/Conceptual/Testing_book/toc.html                          |  483 |
> |   4 | /docs/Carb/Conceptual/Testing_book/Intro/chapter_1_section_1.html    |  479 |
> |   4 | /docs/Carb/Conceptual/Testing_book/index.html                        |  433 |
> |   4 | /docs/Carb/Conceptual/newQuickTour/qt_intro/chapter_1_section_1.html |  425 |
> |   4 | /docs/Carb/date.html                                                 |  421 |
> |   4 | /docs/Carb/Conceptual/newQuickTour/toc.html                          |  375 |
> |   3 | /docs/Carb/index.html                                                 | 1063 |
> |   3 | /docs/Carb/Conceptual/test_system/new_test_system.html               |  930 |
> |   3 | /docs/Carb/Conceptual/test_system/toc.html                           |  921 |
> |   3 | /docs/Carb/Conceptual/test_system/                                   |  799 |
> |   3 | /docs/Carb/Conceptual/Testing_book/toc.html                          |  483 |
> |   3 | /docs/Carb/Conceptual/Testing_book/Intro/chapter_1_section_1.html    |  479 |
> |   3 | /docs/Carb/Conceptual/Testing_book/index.html                        |  433 |
> |   3 | /docs/Carb/Conceptual/newQuickTour/chapter_1_section_1.html          |  425 |
> |   3 | /docs/Carb/date.html                                                 |  421 |
> |   3 | /docs/Carb/Conceptual/newQuickTour/toc.html                          |  375 |
> +-----+-----------------------------------------------------------------------+------+
> 
> Any thoughts/help would be much appreciated!!!

Question about SQL statements......

2004-08-25 Thread David Souza
So I have all of this year's Apache access logs in a huge DB table, most  
of what I wanted to get out is working fine, but I am trying to get top  
10 requests (hits) by day inside a certain section of the hierarchy,  
but I can't seem to get the correct results.

I have been trying many different ways of specifying the SQL statement,  
so here is what I have so far:

I have hits to a specific page, by week working fine here:
mysql> SELECT week(time_stamp, 1) as week, request_uri, count(*) as hits
-> FROM `access_log`
-> WHERE (request_uri="/testing/folder/foldersetup.exe") AND  
(time_stamp > 2004040100 AND time_stamp < 2004082300)
-> GROUP BY week
-> ORDER BY week ASC, hits DESC
-> LIMIT 20;
+--+-+---+
| week | request_uri | hits  |
+--+-+---+
|   27 | /testing/folder/folderSetup.exe | 20069 |
|   28 | /testing/folder/folderSetup.exe |  3785 |
|   29 | /testing/folder/folderSetup.exe |  1607 |
|   30 | /testing/folder/folderSetup.exe |   878 |
|   31 | /testing/folder/folderSetup.exe |   858 |
|   32 | /testing/folder/folderSetup.exe |   779 |
|   33 | /testing/folder/folderSetup.exe |   721 |
|   34 | /testing/folder/folderSetup.exe |  1015 |
+--+-+---+
8 rows in set (0.85 sec)

but when I try to get top hits with a like statement by day (only 2  
days) I get:

mysql> SELECT day(time_stamp) as day, request_uri, count(DISTINCT  
request_uri) as hits
-> FROM `access_log`
-> WHERE (time_stamp > 2004050300 AND time_stamp <  
2004050500) AND (request_uri LIKE "/docs/Carb/%")
-> GROUP BY day
-> ORDER BY day DESC, hits DESC
-> LIMIT 20;
+--+---+--+
| day  | request_uri   | hits |
+--+---+--+
|4 | /docs/Carb/index.html | 7075 |
|3 | /docs/Carb/Conceptual/index.html  | 6979 |
+--+---+--+
2 rows in set (39.68 sec)

which seems to give me all the hits within /docs/Carb/ on the two days  
and then displays the last request_uri that came through. Maybe I  
cannot do this with one SQL statement or I just need more logic.  
Ideally what I would want would be something like:

+-----+-----------------------------------------------------------------------+------+
| day | request_uri                                                           | hits |
+-----+-----------------------------------------------------------------------+------+
|   4 | /docs/Carb/index.html                                                 | 1063 |
|   4 | /docs/Carb/Conceptual/test_system/new_test_system.html               |  930 |
|   4 | /docs/Carb/Conceptual/test_system/toc.html                           |  921 |
|   4 | /docs/Carb/Conceptual/test_system/                                   |  799 |
|   4 | /docs/Carb/Conceptual/Testing_book/toc.html                          |  483 |
|   4 | /docs/Carb/Conceptual/Testing_book/Intro/chapter_1_section_1.html    |  479 |
|   4 | /docs/Carb/Conceptual/Testing_book/index.html                        |  433 |
|   4 | /docs/Carb/Conceptual/newQuickTour/qt_intro/chapter_1_section_1.html |  425 |
|   4 | /docs/Carb/date.html                                                 |  421 |
|   4 | /docs/Carb/Conceptual/newQuickTour/toc.html                          |  375 |
|   3 | /docs/Carb/index.html                                                 | 1063 |
|   3 | /docs/Carb/Conceptual/test_system/new_test_system.html               |  930 |
|   3 | /docs/Carb/Conceptual/test_system/toc.html                           |  921 |
|   3 | /docs/Carb/Conceptual/test_system/                                   |  799 |
|   3 | /docs/Carb/Conceptual/Testing_book/toc.html                          |  483 |
|   3 | /docs/Carb/Conceptual/Testing_book/Intro/chapter_1_section_1.html    |  479 |
|   3 | /docs/Carb/Conceptual/Testing_book/index.html                        |  433 |
|   3 | /docs/Carb/Conceptual/newQuickTour/chapter_1_section_1.html          |  425 |
|   3 | /docs/Carb/date.html                                                 |  421 |
|   3 | /docs/Carb/Conceptual/newQuickTour/toc.html                          |  375 |
+-----+-----------------------------------------------------------------------+------+

Any thoughts/help would be much appreciated!!!


Re: LIKE question

2004-08-25 Thread Michael Stassen
V. M. Brasseur wrote:
How about...
  SELECT foo FROM bar WHERE LCASE(this) = "that";
Although there are always other ways to do it, of course.
Cheers,
--V
Schalk Neethling wrote:
If I search a field for 'doone' and one of the fields contains 'Lorna 
Doone' what is the best comparator to use as LIKE is skipping this one 
and I imagine '=' will do the same?
Case is not the problem.  Unless the column or the match was declared 
BINARY, mysql is case insensitive.  The problem is that LIKE must match the 
whole string.  That is,

  WHERE string_col LIKE 'doone'
is equivalent to
  WHERE string_col = 'doone'
Either way, 'Doone', 'DOONE', 'doone', 'dOoNe', etc. would all match, but 
'Lorna Doone', 'Doonesbury', and '211 Andoone Street' would not.  If you 
want partial string matches, you need to either add wildcards to the LIKE 
comparison, or switch to RLIKE (REGEXP).

        begins with doone   contains doone    ends with doone
        -----------------   --------------    ---------------
 LIKE   'doone%'            '%doone%'         '%doone'
RLIKE   '^doone'            'doone'           'doone$'
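For example, on a non-BINARY column this would match 'Lorna Doone',
'Doonesbury', and '211 Andoone Street' alike (the table and column names are
only illustrative):

  SELECT * FROM customers WHERE name LIKE '%doone%';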
See the manual for details.

Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


re: crosstabs and pivot tables

2004-08-25 Thread Donna Hinshaw
Shawn -
Maybe terminology here...but I think of a "crosstab" query as one which 
yields sums or averages or
some such tabulation.

What I need to do is just "pivot" from rows to columns.
The way I've come up with is this:
ORIGINAL TABLE:
columns  =  id   date   datetype
there are about 20 datetypes (eg. entered_date, modified_date, 
application_date)
there can be up to 20 rows for each ID value

DESIRED TABLE:
columns =   id   entered_date   modified_date   application_date 
so there is just one row for each ID value
SOLUTION:
create temporary tables for each of the date types
columns for TEMP1 = id  entered_date
columns for TEMP2 = id modified_date
columns for TEMP3 = id application_date
 so on for all date types
then, do an insert into the DESIRED TABLE for all columns,
as select (  ... join all 20 tables on the ID value )
Since there are over 23000 ID values, and over 20 date types,
this join on all 20 TEMP tables will be an overnight batch process,
which is fine for the users - this DESIRED TABLE is for reporting
somewhat after the fact, and one day "out-of-synch" is fine.
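For concreteness, a sketch of that temp-table approach showing just two of
the twenty date types (the source table name datelog and the target table
name pivoted are only placeholders):

  CREATE TEMPORARY TABLE temp1
      SELECT id, date AS entered_date  FROM datelog WHERE datetype = 'entered_date';
  CREATE TEMPORARY TABLE temp2
      SELECT id, date AS modified_date FROM datelog WHERE datetype = 'modified_date';

  -- ...one temporary table per date type, then join them all on id:
  INSERT INTO pivoted (id, entered_date, modified_date)
  SELECT t1.id, t1.entered_date, t2.modified_date
  FROM temp1 t1
  LEFT JOIN temp2 t2 ON t2.id = t1.id;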
Any other suggestions?
Donna

[EMAIL PROTECTED] wrote:
I still see what you want as a crosstab query. The only difference, as you 
say very well, is that you want to pivot on the "date type" values and not 
the ID values. 

The only other thing you need to decide in order to make a crosstab report 
is "what information goes in the position for the row ID=x  and column 
datetype=y". Do you want to see the SUM of some value for each of the rows 
with that ID value and Date type? Or the average or maximum or minimum or 
the standard deviation for the set.  I am asking you how do you want to 
calculate each of the values of x, y, z, and w as you listed them in your 
example output?

It's the same pattern as the other crosstab queries (aka pivot tables) but 
you have to tell me which column you want to calculate values from and 
which calculation to use before I can give you an example using your data.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
Donna Hinshaw <[EMAIL PROTECTED]> wrote on 08/20/2004 03:13:16 
PM:

 

Hi - I've read the threads about converting rows of data into columns,
but those threads assume the number of distinct rows is very limited
(say 7 for days of week).
Instead, I have a table like this:
id    date   date type
1 ...  a
1 ...  b

2   a
2    d
Where the number of distinct id values is in the 100,000 range
but the distinct date types are limited to about 20.
I want to get a table (not a view) like this:
id    date a   date b   date d   ..
1  x   y null
2  z  null   w
(based on the values in the first table above)
So...I can't think how to do this.  Help would be appreciated.
TIA.
dmh

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
   

 



Problems with Mysql database and PHPAdmin

2004-08-25 Thread Keith Brownmiller
Greetings All,

A seemingly strange problem.

If I go to the mysql prompt on a linux server (mysql version 3.23.14-alpha),
I can issue the command create database .
I can then issue the command use .

No problem.

However, if I type use mysql for the mysql database
and do a select host, db, user from db; the database that I just created is not there.
However, if I use a phpmysqladmin type program, I can see the database listed there 
with no problem.

Please advise.

Keith
Sys Admin


Re: LIKE question

2004-08-25 Thread V. M. Brasseur
How about...
  SELECT foo FROM bar WHERE LCASE(this) = "that";
Although there are always other ways to do it, of course.
Cheers,
--V
Schalk Neethling wrote:
If I search a field for 'doone' and one of the fields contains 'Lorna 
Doone' what is the best comparator to use as LIKE is skipping this one 
and I imagine '=' will do the same?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: LIKE question

2004-08-25 Thread Schalk Neethling
If I search a field for 'doone' and one of the fields contains 'Lorna 
Doone' what is the best comparator to use as LIKE is skipping this one 
and I imagine '=' will do the same?

--
Kind Regards
Schalk Neethling
Web Developer.Designer.Programmer.President
Volume4.Development.Multimedia.Branding
emotionalize.conceptualize.visualize.realize
Tel: +27125468436
Fax: +27125468436
email:[EMAIL PROTECTED]
web: www.volume4.co.za
This message contains information that is considered to be sensitive or confidential 
and may not be forwarded or disclosed to any other party without the permission of the 
sender. If you received this message in error, please notify me immediately so that I 
can correct and delete the original email. Thank you.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Prepared Statement questions/issues

2004-08-25 Thread Daniel Kasak
Ken Gieselman wrote:
Hi Folks --
I'm running into some issues with large prepared statements, and would love some
feedback about approaches or tweaks that I might have overlooked :)
 


Any suggestions as to how to do this more efficiently/faster?  Or how to
increase the buffer space for the prepared statements, so as to do the inserts
in larger groups?
 

I can't comment on prepared statements - I've never used them. I'm 
running a 4.0.x server here.
But I do know that one of the fastest ways to get data into MySQL is 
with 'load data infile'.
Maybe you export your data to a csv file and import it that way? Line up 
each 1000 inserts in a text file and then run 'load data infile' or 
something like that?

It's not the cleanest solution, granted, but if you need it to work 
better in a hurry, this might be your ticket ( at least until you figure 
out what's up with your prepared statements ).
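A sketch of the LOAD DATA route, assuming the queued rows are first written
out as a comma-separated file (the file path is only a placeholder; the
target table follows the earlier example):

  LOAD DATA INFILE '/tmp/raddata.csv'
  INTO TABLE raddata
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n';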

--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au
-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

auotmate read queries for replication

2004-08-25 Thread Crouch, Luke H.
is there a simple way to automate read queries for replication above the application 
level? we have a huge number of disparate systems with all their own queries, and we'd 
like to be able to set up the reads to go to our slaves, and the writes to go to our 
master without having to change every application's queries to different datasources 
by hand.
 
is there some kind of connection router that can filter the reads to one of the slaves 
and the writes to the master? if so, is the performance hit not worth it?
 
thanks,
-L

Luke Crouch 
918-461-5326 
[EMAIL PROTECTED] 

 


Re: Query Help

2004-08-25 Thread Rhino

- Original Message - 
From: "Ronan Lucio" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, August 25, 2004 5:12 PM
Subject: Query Help


> Hi,
>
> Sorry, I could neither find the information that I need in
> the documentation nor find the correct words for a search
> in the mailing list archives.
>
> I have a query like this:
>
>SELECT cod, descr
>FROM table
>  WHERE AND cod != 7

The 'AND' in the previous line should not be there.

>  AND cod != 10
>  AND cod != 13
>  AND cod != 14
>  AND cod != 15
>  AND cod != 20
>  AND cod != 25
>  AND cod != 30
>  AND cod != 31
>ORDER BY descr
>
> Is there a SQL command to make this query cleaner?
>
> I find something like:
> WHERE cod NOT IN (7,10,13,14,15,20,25,30,31)
>
That's right; that is a much cleaner way to write the query.
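Put together (with the table name left as a placeholder, as in the original
post), the whole query becomes:

   SELECT cod, descr
   FROM yourtable
   WHERE cod NOT IN (7,10,13,14,15,20,25,30,31)
   ORDER BY descr;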

Rhino


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



INSERT IGNORE like feature for rows failing foreign key constraints?

2004-08-25 Thread John McCaskey
I have a logging table where I insert a large number of rows every 5
minutes.  For performance reasons this occurs in bulk inserts of about
5000 rows at a time.  (ie. INSERT INTO table VALUES(...), (...), (...))

 

One of the fields in the table is an id that connects it to another
table.  It is possible that by the time the insert occurs (they queue up
in memory briefly before I create the bulk insert) a separate process
has deleted the entry in the parent table and the id is invalid.

 

When this happens right now the entire insert of 5000 rows fails because
one single row is bad.  I want the behavior to be that the one fails
silently and the other 4999 insert successfully.

 

Any ideas how I can do this?  It seems like INSERT IGNORE would make
sense but that appears to only ignore duplicates not foreign key
failures.

 

John A. McCaskey

Software Development Engineer

IP Sciences, Inc.

[EMAIL PROTECTED]

206.902.2027

 



Query Help

2004-08-25 Thread Ronan Lucio
Hi,

Sorry, I could neither find the information that I need in
the documentation nor find the correct words for a search
in the mailing list archives.

I have a query like this:

   SELECT cod, descr
   FROM table
 WHERE AND cod != 7
 AND cod != 10
 AND cod != 13
 AND cod != 14
 AND cod != 15
 AND cod != 20
 AND cod != 25
 AND cod != 30
 AND cod != 31
   ORDER BY descr

Is there a SQL command to make this query cleaner?

I find something like:
WHERE cod NOT IN (7,10,13,14,15,20,25,30,31)

Thanks,
Ronan



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: One form multiple inserts

2004-08-25 Thread Stuart Felenstein
Too funny! I'm slowly coming to grips with the M2M. I
decided not to opt for it in this situation as I still
believe it would not address my issues.

My problem, or to phrase it better, my solution, is
that I am not supplying titles.  They are, to the user,
blank fields, left to them to supply their title.
Now I may have missed your point.  Though it was 1-3
that held the same title twice. That much I grasped. 
Anyway, because there are literally a few thousand
titles I'd need to insert, I opted not to at this point
unless or until I start seeing a trend of common ones.
 

Does this make any sense ?  Please tell me if I'm
wrong 

Also another quick question:
In your example tables I noticed you have an ID for
the member plus a Primary Key ID?
Is this also common? In particular, with my tables for
members I have 1 primary key column which is also the
member ID.

Is that okay ?

Stuart
--- [EMAIL PROTECTED] wrote:


> CREATE TABLE member (
> MemberID int auto_increment
> , Login varchar(25) not null primary key
> , Name varchar(30) not null
> #   , (other fields as necessary)
> , KEY(MemberID)
> )
> 
> CREATE TABLE title (
> TitleID int auto_increment
> , TitleName varchar(25) not null primary key
> #   , (other fields as needed)
> , KEY (TitleID)
> )
> 
> CREATE TABLE member_title (
> ID int auto_increment
> , member_ID int not null
> , title_ID int not null
> , startdate datetime
> , enddate datetime
> , KEY(ID)
> )
> 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



what did people do before str_to_date ?

2004-08-25 Thread Vincent
Hello All,
Ran into a small problem. I was developing code under the 4.1.x series 
of mysql and but discovered that the production server is 3.23.x.  Some 
of my queries made use of the very handy str_to_date() function. Is the
only solution (besides upgrading production server) to preformat the 
date before sending it to mysql or was there a common method for parsing 
dates before the advent of str_to_date()?
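One common pre-4.1 approach, besides reformatting the date in the application
before sending it, was to rebuild an ISO date from a known fixed input format
with plain string functions; a sketch, assuming an 'MM/DD/YYYY' input:

  SELECT CONCAT(SUBSTRING('08/25/2004', 7, 4), '-',
                SUBSTRING('08/25/2004', 1, 2), '-',
                SUBSTRING('08/25/2004', 4, 2)) AS iso_date;  -- '2004-08-25'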
Thanks,
Vinny

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: mysql-plain Digest 25 Aug 2004 19:36:13 -0000 Issue 3063

2004-08-25 Thread SGreen
The anonymous account IS a security risk and should be deleted as soon as 
you are capable of managing "real" logins. It's basically there to let 
newbie administrators get up to speed with as few problems as possible. 
Now that you are comfortable with the MySQL privileges, dump it and hope 
it never comes back... ;-)

http://dev.mysql.com/doc/mysql/en/Default_privileges.html

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Rusty Wright <[EMAIL PROTECTED]> wrote on 08/25/2004 04:35:07 
PM:

>Date: Wed, 25 Aug 2004 19:30:04 +0300
>To: [EMAIL PROTECTED]
>From: Egor Egorov <[EMAIL PROTECTED]>
>Subject: Re: anonymous localhost user in mysql user grant table
> 
>"rusty" <[EMAIL PROTECTED]> wrote:
> 
>> What's the purpose of the entry in the user table where it has 
>> host='localhost' and user='' (blank) and all of the permissions set 
>> to N?
> 
>You may want to look at 
> 
>http://dev.mysql.com/doc/mysql/en/Privilege_system.html
> 
>and 
> 
>http://dev.mysql.com/doc/mysql/en/User_Account_Management.html
> 
> I have read them and just now re-read them in case I overlooked
> anything.  The only place where I find anything specific about the
> anonymous user is section 5.5.8, "causes of access denied errors", but
> it only explains an effect from the existence of the anonymous user,
> not why the account was set up by the mysql_install_db program.
> 
> I didn't find anything related to my original question, "is this some
> sort of security safety net and it would be dangerous to delete it?
> Or is it an example of setting up an anonymous locahost user and it's
> safe to delete it?"
> 
> I'm trying to understand the purpose of the anonymous account; is it
> needed specifically in order to run a more secure mysql server, does
> it help prevent break-ins, etc.?
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 


Re: Could this be dangerous :Values and Labels

2004-08-25 Thread SGreen
An integer column can hold rather large numbers; why not go ahead and use 
the ones that are <100? They are as valid as any other number, aren't 
they?

I think you are confusing how things "look" with what they "are".  If you 
need really BIG numbers so that you can identify MANY rows of data use a 
"bigint unsigned" column (from 0 to 18446744073709551615) . If you only 
need to use the values from 0 to 255, use a "tinyint unsigned" column. 

Read -> http://dev.mysql.com/doc/mysql/en/Numeric_type_overview.html

You generally don't have to make your keys pretty, just make them work. 
You make them pretty when you show them to the user, the database doesn't 
care.
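A sketch of right-sized columns for the example tables above (the names
follow the earlier illustration):

  CREATE TABLE Static_Table (
      ID    TINYINT UNSIGNED NOT NULL PRIMARY KEY,  -- 0..255 is plenty for a short list
      Value VARCHAR(30)      NOT NULL
  );

  CREATE TABLE Dynamic_Table (
      MemberID     INT UNSIGNED     NOT NULL,
      Airline_Pref TINYINT UNSIGNED NOT NULL        -- same type as Static_Table.ID
  );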

Regards,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Stuart Felenstein <[EMAIL PROTECTED]> wrote on 08/25/2004 04:41:28 PM:

> The other idea I had aside from mnemonic (that was
> yours)  was to use beefier numbers.  I started listing
> them from 1.   I am thinking of 3 digits maybe.
> 
> Stuart
> --- Rhino <[EMAIL PROTECTED]> wrote:
> 
> > 
> > - Original Message - 
> > From: "Stuart Felenstein" <[EMAIL PROTECTED]>
> > To: <[EMAIL PROTECTED]>
> > Sent: Wednesday, August 25, 2004 3:36 PM
> > Subject: Could this be dangerous :Values and Labels
> > 
> > 
> > > Or maybe just bad practice. Thought before I go
> > any
> > > further I'll ask.
> > >
> > > I have a few static tables that list out "items"
> > and
> > > the primary key is an "assigned" ID.  Meaning I
> > did
> > > not set auto-increment.  As I add items I will add
> > the
> > > associated ID number.
> > >
> > > Now to the transactional tables.  While the label
> > has
> > > the item listed in the form, it gets inserted into
> > the
> > > table with the ID number.  To illustrate:
> > >
> > > Static_Table
> > > ID  Value
> > > 1   United
> > > 2   Jet Blue
> > > 3   Southwest
> > > 4   American
> > >
> > > Dynamic_Table
> > > MemberIDAirline_Pref
> > > 200   1
> > > 201   4
> > > 202   3
> > > 203   4
> > > 204   1
> > >
> > > Pros cons dangers advantages comments ?
> > >
> > I *think* you're asking if it is okay to use codes
> > instead of real values in
> > databases, such as '1' for 'United'. That is more
> > than okay, it is very
> > widely done.
> > 
> > The pros are pretty obvious:
> > - codes are usually much shorter than the real
> > values, which results in
> > space savings for data storage
> > 
> > The cons are a little less obvious:
> > - unless you memorize the codes (which you *will*
> > tend to do over time, as
> > long as there aren't too many of them), you will
> > have to do lookups to
> > determine what real value corresponds to a given
> > code. That often translates
> > into extra joins in your programs and queries. This
> > should not be a big deal
> > though since joins usually perform pretty well in
> > most cases.
> > 
> > I'd like to offer one comment. If I were making up
> > the codes, I'd try to
> > choose codes that were mnemonic, such as 'U' for
> > 'United' and 'A' for
> > 'American'. (Or maybe 'UA' and 'AA' since those
> > abbreviations might be
> > self-explanatory to many users of your system.)
> > 
> > Lastly, with regards to Auto-Increment, you should
> > not feel that all keys
> > should be Auto-Incremented. I think of
> > Auto-Increment as a convenience for
> > generating key values when you don't have any strong
> > preference for a given
> > key having a given value. However, I don't think you
> > should ALWAYS use
> > Auto-Incremented keys. For example, your airline
> > lookup table is more
> > meaningful if you use short letter codes for your
> > airlines than if you use
> > integers.
> > 
> > Rhino
> > 
> > 
> > -- 
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe: 
> >
> http://lists.mysql.com/[EMAIL PROTECTED]
> > 
> > 
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 


Re: Could this be dangerous :Values and Labels

2004-08-25 Thread Rhino

- Original Message - 
From: "Stuart Felenstein" <[EMAIL PROTECTED]>
To: "Rhino" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, August 25, 2004 4:41 PM
Subject: Re: Could this be dangerous :Values and Labels


> The other idea I had aside from mnemonic (that was
> yours)  was to use beefier numbers.  I started listing
> them from 1.   I am thinking of 3 digits maybe.
>
I'm really not sure what "beefier" numbers buy you but I don't suppose a 3
digit number is appreciably worse than a 1 digit number. However, I think
mnemonics are better than any kind of number since they are much more easily
memorized. Then, your users won't need you to do lookups to tell them what
airline '1' (or '123') is, assuming you've chosen good mnemonics.

Rhino


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Can append word into varchar column in Update statement?

2004-08-25 Thread SGreen
Yes, UPDATE ... SET... is exactly what you would use. Just set the column 
to be the combined value of the two values.
Please read: http://dev.mysql.com/doc/mysql/en/UPDATE.html

UPDATE tablename
SET fieldname = 'new list of values that you wanted'
WHERE _where_conditions_

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Monet <[EMAIL PROTECTED]> wrote on 08/24/2004 07:49:37 PM:

> Hi all,
> 
> I though it is impossible to do that but I like check
> with you guys in case it is just because I never heard
> it.
> I have a table with a text column. Is there any way I
> can append some word into this field when I update the
> table?
> For instance,
> Table temp, column ReviewComments, data type of
> ReviewComments is varchar.
> Value in  âReviewCommentsâ is âPSRCâ. 
> After update, value in âReviewCommentsâ should be
> âPSRC, WHCâ
> 
> Can I do that by using UPDATE â SET â. 
> 
> Thanks a lot.
> 
> Monet
> 
> 
> 
> 
> ___
> Do you Yahoo!?
> Win 1 of 4,000 free domain names from Yahoo! Enter now.
> http://promotions.yahoo.com/goldrush
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 



Re: One form multiple inserts

2004-08-25 Thread SGreen
Stuart,

You need to begin to divorce these two processes from each other: STORAGE 
and INPUT/OUTPUT.  What your user's see on a web page doesn't need to look 
anything at all like your database structure. A user interface (UI) is 
designed for ease of use. The other is designed for query and storage 
efficiency and there are frequent occasions where the two structures are 
radically different.

You need to store information on two entities (sometimes also referred to 
as objects) that are related in a many-to-many relationship: any ONE 
"member" is related to several (or zero) "titles" and any "title" can be 
used by several (or zero) "members".  To represent this in the database 
you typically need 3 tables, one for the "member"-specific information, 
one for the "title"-specific information, and one carry information about 
each "member"-"title" association. 

CREATE TABLE member (
MemberID int auto_increment
, Login varchar(25) not null primary key
, Name varchar(30) not null
#   , (other fields as necessary)
, KEY(MemberID)
)

CREATE TABLE title (
TitleID int auto_increment
, TitleName varchar(25) not null primary key
#   , (other fields as needed)
, KEY (TitleID)
)

CREATE TABLE member_title (
ID int auto_increment
, member_ID int not null
, title_ID int not null
, startdate datetime
, enddate datetime
, KEY(ID)
)

So let's say you have only 2 members: 

INSERT member (Login, Name) VALUES ('jblow', 'Joseph Blow'), ('msmith', 
'Mary Smith');

Between them they have held the titles of "Flunky", "Gopher", "Assistant", 
"Intern", "Scumbag", and "Vice-President" (as we are all aware, frequently 
a few total idiots do make it into senior management). So let's add those 
titles to our "list of titles"

INSERT title (TitleName) VALUES ('Flunky'), 
('Gopher'),('Assistant'),('Intern'),('Brownoser'),('Vice-President');

Here is where the rubber meets the road. Who held which position and 
between which dates?

INSERT member_title (member_ID, title_ID, startdate, enddate) 
VALUES (1,1,'2004-02-01', '2004-03-01')
, (1,2,'2004-03-01', '2004-03-15')
, (1,3,'2004-03-15', '2004-04-01')
, (1,6,'2004-04-01', '2004-04-02')
, (1,3,'2004-04-02', null)
, (2,5,'2003-11-15', '2004-01-01')
, (2,1,'2004-01-01', '2004-03-01')
, (2,3,'2004-02-01', '2004-08-10')
, (2,4,'2004-08-10', null);

Can you see how this table contains a running history for each "member" 
and what "title" they held and when they held it? I left the enddates 
empty (null) because they haven't been moved from those positions. There 
is one case where one of your "member"s held two "titles" at the same 
time. Can you spot it?
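
When you want the history back out in readable form, a join across the three
tables does it. A minimal sketch using the column names defined above:

  SELECT m.Name, t.TitleName, mt.startdate, mt.enddate
    FROM member_title AS mt
    JOIN member AS m ON m.MemberID = mt.member_ID
    JOIN title  AS t ON t.TitleID  = mt.title_ID
   ORDER BY m.Name, mt.startdate;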

If you would like more details, just ask. This was only a summary of how 
to store information in a many-to-many structure. I would have given you 
more examples but I am a little busy today and couldn't take all of the 
time I wanted to spend on this topic (sorry!)

Yours,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Stuart Felenstein <[EMAIL PROTECTED]> wrote on 08/24/2004 06:21:58 PM:

> Trying to figure out how this gets done. Let me
> explain first. By the way I hope this is not off topic
> as perhaps it is more towards application then core db
> .
> 
> I have a table:
> 
> Count[int][auto-increment], MemberID[int],
> Title[varchar], TitleYear[int]
> 
> In my organization members can have or have held
> various titles.  I'm trying to collect, right now, 5
> titles and the years they've held those titles.
> 
> Extraneous - Now I'm in a precarious place, cause I am
> new to web dev and db and have had the fortunate
> experience to be using an extension to dreamweaver
> that makes it relatively simple.  Until you have to go
> beyond what they provide in functionality /
> capability. It's php / adodb.
> 
> So I"ve looked around and found a few "methods" but no
> detail, nor have I really drilled down on how they get
> implemented. 
> 
> One method was basically a loop of sorts.  Not
> entirely sure but I suppose pages can keep refreshing
> 5 times.
> Second, was to pass all the variables over to another
> page.  It sounded like maybe this secondary page is
> hidden but set up to accept an insert.  The third,
> which I tried, unsuccessfully, was through the use of
> after triggers.  Possible I need to work on this more.
> 
> Anyway I wouldn't mind hearing how other people deal
> with this issue.  Not sure if I could set up some SQL
> statements.
> 
> Thank you ,
> Stuart
> 
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 


Re: Could this be dangerous :Values and Labels

2004-08-25 Thread Stuart Felenstein
The other idea I had, aside from mnemonics (that was
yours), was to use beefier numbers.  I started listing
them from 1; I am thinking of 3 digits, maybe.

Stuart
--- Rhino <[EMAIL PROTECTED]> wrote:

> 
> - Original Message - 
> From: "Stuart Felenstein" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Wednesday, August 25, 2004 3:36 PM
> Subject: Could this be dangerous :Values and Labels
> 
> 
> > Or maybe just bad practice. Thought before I go
> any
> > further I'll ask.
> >
> > I have a few static tables that list out "items"
> and
> > the primary key is an "assigned" ID.  Meaning I
> did
> > not set auto-increment.  As I add items I will add
> the
> > associated ID number.
> >
> > Now to the transactional tables.  While the label
> has
> > the item listed in the form, it gets inserted into
> the
> > table with the ID number.  To illustrate:
> >
> > Static_Table
> > ID  Value
> > 1   United
> > 2   Jet Blue
> > 3   Southwest
> > 4   American
> >
> > Dynamic_Table
> > MemberIDAirline_Pref
> > 200   1
> > 201   4
> > 202   3
> > 203   4
> > 204   1
> >
> > Pros cons dangers advantages comments ?
> >
> I *think* you're asking if it is okay to use codes
> instead of real values in
> databases, such as '1' for 'United'. That is more
> than okay, it is very
> widely done.
> 
> The pros are pretty obvious:
> - codes are usually much shorter than the real
> values, which results in
> space savings for data storage
> 
> The cons are a little less obvious:
> - unless you memorize the codes (which you *will*
> tend to do over time, as
> long as there aren't too many of them), you will
> have to do lookups to
> determine what real value corresponds to a given
> code. That often translates
> into extra joins in your programs and queries. This
> should not be a big deal
> though since joins usually perform pretty well in
> most cases.
> 
> I'd like to offer one comment. If I were making up
> the codes, I'd try to
> choose codes that were mnemonic, such as 'U' for
> 'United' and 'A' for
> 'American'. (Or maybe 'UA' and 'AA' since those
> abbreviations might be
> self-explanatory to many users of your system.)
> 
> Lastly, with regards to Auto-Increment, you should
> not feel that all keys
> should be Auto-Incremented. I think of
> Auto-Increment as a convenience for
> generating key values when you don't have any strong
> preference for a given
> key having a given value. However, I don't think you
> should ALWAYS use
> Auto-Incremented keys. For example, your airline
> lookup table is more
> meaningful if you use short letter codes for your
> airlines than if you use
> integers.
> 
> Rhino
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:   
>
http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Could this be dangerous :Values and Labels

2004-08-25 Thread Stuart Felenstein
I like the mnemonic idea.  I wish it were as simple as
airlines, though.  [Orbitz, if you're out there listening,
don't worry, I'm not coming after you!]
Well, something to ponder.

Yes, regarding AI keys, I have them for transactions,
new regs, etc.  For the longer static tables we'll see. The
rest I can type in.

Thanks all
Stuart
--- Rhino <[EMAIL PROTECTED]> wrote:

> 
> - Original Message - 
> From: "Stuart Felenstein" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Wednesday, August 25, 2004 3:36 PM
> Subject: Could this be dangerous :Values and Labels
> 
> 
> > Or maybe just bad practice. Thought before I go
> any
> > further I'll ask.
> >
> > I have a few static tables that list out "items"
> and
> > the primary key is an "assigned" ID.  Meaning I
> did
> > not set auto-increment.  As I add items I will add
> the
> > associated ID number.
> >
> > Now to the transactional tables.  While the label
> has
> > the item listed in the form, it gets inserted into
> the
> > table with the ID number.  To illustrate:
> >
> > Static_Table
> > ID  Value
> > 1   United
> > 2   Jet Blue
> > 3   Southwest
> > 4   American
> >
> > Dynamic_Table
> > MemberIDAirline_Pref
> > 200   1
> > 201   4
> > 202   3
> > 203   4
> > 204   1
> >
> > Pros cons dangers advantages comments ?
> >
> I *think* you're asking if it is okay to use codes
> instead of real values in
> databases, such as '1' for 'United'. That is more
> than okay, it is very
> widely done.
> 
> The pros are pretty obvious:
> - codes are usually much shorter than the real
> values, which results in
> space savings for data storage
> 
> The cons are a little less obvious:
> - unless you memorize the codes (which you *will*
> tend to do over time, as
> long as there aren't too many of them), you will
> have to do lookups to
> determine what real value corresponds to a given
> code. That often translates
> into extra joins in your programs and queries. This
> should not be a big deal
> though since joins usually perform pretty well in
> most cases.
> 
> I'd like to offer one comment. If I were making up
> the codes, I'd try to
> choose codes that were mnemonic, such as 'U' for
> 'United' and 'A' for
> 'American'. (Or maybe 'UA' and 'AA' since those
> abbreviations might be
> self-explanatory to many users of your system.)
> 
> Lastly, with regards to Auto-Increment, you should
> not feel that all keys
> should be Auto-Incremented. I think of
> Auto-Increment as a convenience for
> generating key values when you don't have any strong
> preference for a given
> key having a given value. However, I don't think you
> should ALWAYS use
> Auto-Incremented keys. For example, your airline
> lookup table is more
> meaningful if you use short letter codes for your
> airlines than if you use
> integers.
> 
> Rhino
> 
> 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql-plain Digest 25 Aug 2004 19:36:13 -0000 Issue 3063

2004-08-25 Thread Rusty Wright
   Date: Wed, 25 Aug 2004 19:30:04 +0300
   To: [EMAIL PROTECTED]
   From: Egor Egorov <[EMAIL PROTECTED]>
   Subject: Re: anonymous localhost user in mysql user grant table

   "rusty" <[EMAIL PROTECTED]> wrote:

   > What's the purpose of the entry in the user table where it has 
   > host='localhost' and user='' (blank) and all of the permissions set to N?

   You may want to look at 

   http://dev.mysql.com/doc/mysql/en/Privilege_system.html

   and 

   http://dev.mysql.com/doc/mysql/en/User_Account_Management.html

I have read them and just now re-read them in case I overlooked
anything.  The only place where I find anything specific about the
anonymous user is section 5.5.8, "causes of access denied errors", but
it only explains an effect from the existence of the anonymous user,
not why the account was set up by the mysql_install_db program.

I didn't find anything related to my original question, "is this some
sort of security safety net and it would be dangerous to delete it?
Or is it an example of setting up an anonymous locahost user and it's
safe to delete it?"

I'm trying to understand the purpose of the anonymous account; is it
needed specifically in order to run a more secure mysql server, does
it help prevent break-ins, etc.?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Could this be dangerous :Values and Labels

2004-08-25 Thread Rhino

- Original Message - 
From: "Stuart Felenstein" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, August 25, 2004 3:36 PM
Subject: Could this be dangerous :Values and Labels


> Or maybe just bad practice. Thought before I go any
> further I'll ask.
>
> I have a few static tables that list out "items" and
> the primary key is an "assigned" ID.  Meaning I did
> not set auto-increment.  As I add items I will add the
> associated ID number.
>
> Now to the transactional tables.  While the label has
> the item listed in the form, it gets inserted into the
> table with the ID number.  To illustrate:
>
> Static_Table
> ID  Value
> 1   United
> 2   Jet Blue
> 3   Southwest
> 4   American
>
> Dynamic_Table
> MemberIDAirline_Pref
> 200   1
> 201   4
> 202   3
> 203   4
> 204   1
>
> Pros cons dangers advantages comments ?
>
I *think* you're asking if it is okay to use codes instead of real values in
databases, such as '1' for 'United'. That is more than okay, it is very
widely done.

The pros are pretty obvious:
- codes are usually much shorter than the real values, which results in
space savings for data storage

The cons are a little less obvious:
- unless you memorize the codes (which you *will* tend to do over time, as
long as there aren't too many of them), you will have to do lookups to
determine what real value corresponds to a given code. That often translates
into extra joins in your programs and queries. This should not be a big deal
though since joins usually perform pretty well in most cases.
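
When you do need the readable labels, a single join brings them back. A
minimal sketch using the table and column names from your post:

  SELECT d.MemberID, s.Value AS Airline
    FROM Dynamic_Table AS d
    JOIN Static_Table AS s ON s.ID = d.Airline_Pref;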

I'd like to offer one comment. If I were making up the codes, I'd try to
choose codes that were mnemonic, such as 'U' for 'United' and 'A' for
'American'. (Or maybe 'UA' and 'AA' since those abbreviations might be
self-explanatory to many users of your system.)

Lastly, with regards to Auto-Increment, you should not feel that all keys
should be Auto-Incremented. I think of Auto-Increment as a convenience for
generating key values when you don't have any strong preference for a given
key having a given value. However, I don't think you should ALWAYS use
Auto-Incremented keys. For example, your airline lookup table is more
meaningful if you use short letter codes for your airlines than if you use
integers.

Rhino


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Add new database into existing MYSQL database

2004-08-25 Thread SGreen
If you do not use the GRANT, REVOKE, and DROP USER statements to create 
and destroy user accounts, you must manually FLUSH PRIVILEGES.  It's all 
documented in the manual:

http://dev.mysql.com/doc/mysql/en/GRANT.html

and in even more detail here:

http://dev.mysql.com/doc/mysql/en/User_Account_Management.html
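
A minimal sketch of both routes (database and account names are placeholders):

  -- preferred: GRANT takes effect immediately
  GRANT SELECT, INSERT, UPDATE, DELETE ON newdb.* TO 'existinguser'@'localhost';

  -- only needed if you edited the mysql.db / mysql.user tables by hand
  FLUSH PRIVILEGES;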

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

"Yong Wang" <[EMAIL PROTECTED]> wrote on 08/24/2004 03:15:39 PM:

> Hi, all:
>  I use root login to create a new databse in the existing server.
> Then I use the existing user name in the mysql user table as user name,
> add the database and username .. info into mysql db table. Then
> mysqladmin to load the table. When I login the database using the
> existing user name, I
> only can see previously existing database without seeing the new added
> database. What is the problem ?
> Thanks a lot.
> 
>  Yong
> 
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 


Re: calculating ratio of two datetime columns

2004-08-25 Thread SGreen
MySQL has several functions for dealing with date and time values, many of 
which exist only as of 4.1.x. Which ones are available to you depend on 
your version. Please read: 
http://dev.mysql.com/doc/mysql/en/Date_and_time_functions.html

Actually, I believe that the formula you wrote is almost the statement you 
will need. What I think you are trying to do is to compute the % ELAPSED 
time since the start of the project as compared to the SCHEDULED time.

That formula would look similar to (but not exactly like):

  (End_Date_Actual - Start_Date_Actual)/(End_Date_Scheduled - 
Start_Date_Scheduled) * 100

You want the actual elapsed time (expressed in convenient units, you 
suggested seconds) divided by the scheduled elapsed time (expressed in the 
same units) then multiplied by 100 to give you a percentage value. Using 
only the functions available pre-4.1 that could look like (forgive the 
wrapping):

(CAST(UNIX_TIMESTAMP(End_Date_Actual) AS SIGNED) - 
CAST(UNIX_TIMESTAMP(Start_Date_Actual) AS SIGNED)) / 
(CAST(UNIX_TIMESTAMP(End_Date_Scheduled) AS SIGNED) - 
CAST(UNIX_TIMESTAMP(Start_Date_Scheduled) AS SIGNED)) * 100
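
Put into a complete statement, that could look like the following; note that
the two Start_Date columns are an assumption on my part, alongside the
columns you already select:

  SELECT (CAST(UNIX_TIMESTAMP(End_Date_Actual) AS SIGNED)
        - CAST(UNIX_TIMESTAMP(Start_Date_Actual) AS SIGNED))
       / (CAST(UNIX_TIMESTAMP(End_Date_Scheduled) AS SIGNED)
        - CAST(UNIX_TIMESTAMP(Start_Date_Scheduled) AS SIGNED)) * 100 AS pct_elapsed
    FROM Table1
   WHERE ID = '10';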

The other option is to get the 4 starting and ending date values and do 
the arithmetic in your programming language, instead. Go with whichever is 
easiest and works best for you.

HTH,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

"Viswanatha Rao" <[EMAIL PROTECTED]> wrote on 08/24/2004 03:06:13 PM:

> I am selecting two columns each of type DateTime from a mysql table from
> perl.
> 
> my $sth = $dbh->prepare("SELECT End_Date_Scheduled, End_Date_Actual FROM
> Table1 WHERE ID='10'" );
> $sth->execute();
> while (my $ref = $sth->fetchrow_hashref()) {
> print"Found: end date scheduled = $ref->{'End_Date_Scheduled'}, end
> date actual = $ref->{'End_Date_Actual'}\n";
> }
> $sth->finish();
> 
> It works.
> 
> However, I need to find the % ratio = ((end_date_scheduled -
> end_date_actual)/end_date_scheduled) * 100
> 
> To do this I need the DateTime values in seconds. Can someone help me
> with this?
> 
> 
> 
> Best Regards
> Vishwa Rao
> 
> 
> 


RE: autoincrement question

2004-08-25 Thread Victor Pendleton
You will have to call the last_insert_id() function in order to obtain the
autoincrement value. 
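
A minimal sketch (the table and column names here are made up for
illustration):

  INSERT INTO widgets (id, name) VALUES (NULL, 'sprocket');
  SELECT LAST_INSERT_ID();

The value is tracked per connection, so concurrent inserts from other clients
will not change what you read back.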

-Original Message-
From: dan orlic
To: [EMAIL PROTECTED]
Sent: 8/25/04 2:26 PM
Subject: autoincrement question

question:

   I have an insert statement that has a null for the value of the 
primary key, id, since that value is also a
auto-increment field.  The problem i am running into is I need that Id 
further down the road for map tables, but I have no idea
what the id is, since it is in fact, autoincremented when inserted into 
the DB.

My question is this... is there anything I can add to my insert 
statement that can have mysql return the id value? or am I just nuts?

thank you for your help

dan


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Could this be dangerous :Values and Labels

2004-08-25 Thread Stuart Felenstein
Or maybe just bad practice. Thought before I go any
further I'll ask.  

I have a few static tables that list out "items" and
the primary key is an "assigned" ID.  Meaning I did
not set auto-increment.  As I add items I will add the
associated ID number.

Now to the transactional tables.  While the label has
the item listed in the form, it gets inserted into the
table with the ID number.  To illustrate:

Static_Table
ID  Value
1   United
2   Jet Blue
3   Southwest
4   American

Dynamic_Table
MemberIDAirline_Pref
200   1
201   4
202   3
203   4
204   1

Pros cons dangers advantages comments ?

Thank you
Stuart

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



autoincrement question

2004-08-25 Thread dan orlic
question:
   I have an insert statement that has a null for the value of the 
primary key, id, since that value is also an
auto-increment field.  The problem I am running into is that I need that id 
further down the road for map tables, but I have no idea
what the id is, since it is, in fact, auto-incremented when inserted into 
the DB.

My question is this... is there anything I can add to my insert 
statement that will have MySQL return the id value? Or am I just nuts?

thank you for your help
dan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


how to hide certain columns

2004-08-25 Thread Mauricio Pellegrini
Hi,

I'm using a few variables to do some calculations in the select list
of a query  but I don't like those columns ( the ones performing the
calc's ) to be shown in the result. for example:

Set @var1=0;
Set @var2=1;

Select  Col1,   as 'C1'
Col2as 'C2'
@var1:[EMAIL PROTECTED]   as 'C3'
@var2:[EMAIL PROTECTED]@var1
from test

I would like to only see columns C1 , C2 and C3
But I need @var2:[EMAIL PROTECTED]@var1 because it is incremented for the next row

Is there a way to hide @var2:[EMAIL PROTECTED]@var1 but still do the calculation
it implies?

The only solution I found is by using a temporary table this way 

Set @var1=0;
Set @var2=1;

Create temporary table tmp_test as 
Select  Col1,   as 'C1'
Col2as 'C2'
@var1:[EMAIL PROTECTED]   as 'C3'
@var2:[EMAIL PROTECTED]@var1
from test;

Select C1, C2 ,C3 from tmp_test


Is there another way ? 


Thanks a million
Mauricio




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL 5.0.1 won't restart

2004-08-25 Thread Scott Hamm
I'm running Windows 2000 and MySQL 5.0.1. When I tried to restart the
service, it wouldn't start up again; the error message said to try again in 30
minutes. After waiting 30 minutes, it still brings up the same error message.


What might be the cause?

Scott

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Secure Connection(e.g. SSL) Question

2004-08-25 Thread Eamon Daly
FYI, DBD::MySQL does support SSL. See

http://search.cpan.org/~rudy/DBD-mysql-2.9004/lib/DBD/mysql.pm

and search for mysql_ssl. You'll need to provide

mysql_ssl_client_key
mysql_ssl_client_cert
mysql_ssl_ca_file

as part of the DSN on the perl side, and set ssl-ca,
ssl-key, and ssl-cert on the server.
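
If you also want the server to refuse unencrypted connections for that
account, GRANT supports a REQUIRE SSL clause. A minimal sketch (user, host
and database names are placeholders):

  GRANT ALL PRIVILEGES ON mydb.* TO 'remoteuser'@'%' REQUIRE SSL;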

There's more on the subject at mysql.com:

http://dev.mysql.com/doc/mysql/en/Secure_requirements.html


Eamon Daly



- Original Message - 
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, August 24, 2004 11:12 AM
Subject: RE: MySQL Secure Connection(e.g. SSL) Question


> > I need to connect to a remote MySQL database from a PC using
> > SSL. I would
> > prefer to connect using perl DBD. Does anyone have a
> > suggestion how I can
> > accomplish this task or an alternative solution?
>
> What about stunnel or ssh tunnels and then use DBD::mysql on top.
>
> ISTR that DBD::mysql cannot use SSL'ed mysql client connections and your
> remote server would have to be complied to support it.
>
> Greg
>
> >
> > Thank You
> >
> >
> > -- 
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:
> > http://lists.mysql.com/[EMAIL PROTECTED]
> >
> >
>
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How do I find out a table's constraints ?

2004-08-25 Thread Martijn Tonies



> What command should I use to find out a table's constraints?

If you're talking about Foreign Key constraints, you
can have a look at them via the
SHOW CREATE TABLE <tablename>
statement.
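
For example (the table name is just a placeholder):

  SHOW CREATE TABLE mytable;

The CREATE TABLE statement it prints includes any FOREIGN KEY clauses
defined on the table.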

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How do I find out a table's constraints ?

2004-08-25 Thread Nestor Florez
What command should I use to find out a table's constraints?

Thanks,
:-)

Néstor Alberto Flórez Torres



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: frecvent table corruption

2004-08-25 Thread adam

-Original Message-
From: Egor Egorov [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 25, 2004 12:40 PM
To: [EMAIL PROTECTED]
Subject: Re: frecvent table corruption

"adam" <[EMAIL PROTECTED]> wrote:

> The actual error message we get is: 
> 
> " SELECT query FROM namedqueries WHERE userid = 4 AND name = '(Default
> query)': Got error 127 from table handler at globals.pl line 276. "

[EMAIL PROTECTED] egor]$ perror  127
Error code 127:  Unknown error 127
127 = Record-file is crashed

> After repairing the table by using the mysql Control Center we lose lots
of
> data, even 60% of it. 
> 
> 
> 
> The mysql version is 4.0.18 installed on Red Hat.

Abnormal situation. What MySQL version is installed on Red Hat? The Red Hat
build or MySQL build or your own build? 

It is the binary MySQL build, from the following archive: 
mysql-standard-4.0.18-pc-linux-i686.tar.gz


I have actually looked at some of the possible causes and eliminated the
following:
- the disk space is sufficient
- the MySQL server never crashed; it has been running for over 100 days.
 
Also, I am using MyISAM tables with the skip-locking option, because we only
have one mysqld instance. 
 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Secure Connection(e.g. SSL) Question

2004-08-25 Thread Joshua J. Kugler
Something else to check out is Stunnel.  It creates SSL tunnels between hosts 
without requiring logins (basically port redirection).  Also, MySQL has built 
in SSL now, so you might want to look at that.

j- k-

On Wednesday 25 August 2004 08:10 am, [EMAIL PROTECTED] said something like:
> Something else I have done in the past was to use Cygwin to create an SSH
> session with the remote computer and use the remote computer's MySQL
> client/tools.
>
> If you are used to working in a "Terminal Server" session (I think they
> now call it "Remote Desktop Connectivity") it will feel very familiar.  I
> also fiddled around with Cygwin long enough and hard enough (translate: I
> spent lots of time reading groups and docs and experimenting) to enable me
> to create local X windows so that I could run a full GUI shell (GNOME or
> KDE, I can't remember) remotely. Please don't ask me how I got it all set
> up as I only did it once and it was a long while ago. I know it's possible
> because I did it and it worked well for my situation.
>
> FWIW,
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
>
> "Paul Maine" <[EMAIL PROTECTED]> wrote on 08/24/2004 12:03:19 PM:
> > I need to connect to a remote MySQL database from a PC using SSL. I
>
> would
>
> > prefer to connect using perl DBD. Does anyone have a suggestion how I
>
> can
>
> > accomplish this task or an alternative solution?
> >
> > Thank You
> >
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

-- 
Joshua J. Kugler -- Fairbanks, Alaska -- ICQ#:13706295
Every knee shall bow, and every tongue confess, in heaven, on earth, and under 
the earth, that Jesus Christ is LORD -- Count on it!

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ANN: MyCon 2004.1.1 Released

2004-08-25 Thread SciBit MySQL Team

SciBit is happy and proud to announce the 2004.1.1 release of MyCon.  

If you already have a registered copy, upgrading your current version is free and you 
can download the new MyCon Pro version (or for trial purposes):
http://download.scibit.com/software/mycon/mcp.exe
FreeMyCon is completely free for all to use:
http://download.scibit.com/software/mycon/mcf.exe

What's new summary:
* More than a hundred user suggestions and improvements have been implemented

* Pertaining to navigation, we had to reconcile such diverse comments as
"I LOVE MyCon" with "I want my simpler Mascon folders back".  Also, to
accommodate users with a limited screen size (laptops etc.) we have added
favorite creation functionality which allows you to create favorite shortcuts
to ANY folder or subfolder in your folder view.  You can also organize,
reorder and arrange these shortcuts to your heart's content. For example, you
can now group tables, queries, reports, etc. together even though they may
all reside on different servers or in different databases.  You can also use
the favorite view without loss of any functionality when compared to the
folders view.  In short, we have enabled you to create virtual folder
structures containing subsets of any nodes in your default folder view.

* A lot of new functionality was built into the Connection Properties pane,
including server and client versions, ping times, setup/customization of
which databases to show, and compression and timeout settings. It now also
contains a configuration tab where you can view all your MySQL variables
(global & session, including support for the SET command), MySQL status,
processes (including support for the KILL command), table types, privileges
and logs (BDB)

* Table/Query Grid Views now include a text button which you can use to fetch
MySQL data as it is returned by MySQL and edit it as if it is all text. Thus,
when enabled, MyCon does not convert any values to integers, date/time,
enums, sets, etc. with the numerous editors available for editing these
datatypes.

* Grid View now also does server-side sorting by default when you sort one or
multiple columns by clicking the column names.  Client-side sorting can still
be done on one or multiple columns utilizing the "group by this column" pane.

For a full description and discussion of what is new, please see:
http://forum.scibit.com/viewtopic.php?t=147

To demonstrate the new favorites functionality, we have added a new online
Flash tutorial to the MyCon home:
http://www.scibit.com/products/mycon
Tutorial: http://www.scibit.com/products/mycon/MyCon.htm

Detailed Online Help is available at:
http://help.scibit.com/mycon

For any questions, suggestions or bug reports, please don't hesitate to contact us at:
[EMAIL PROTECTED]

For general discussions relating to MyCon, please use our forum at:
http://forum.scibit.com

If you use a RSS reader, you can also add our RSS news feed to get announcements:
http://www.scibit.com/scibit.rss

Best wishes, 
SciBit's MySQL Team
http://www.scibit.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: frecvent table corruption

2004-08-25 Thread Egor Egorov
"adam" <[EMAIL PROTECTED]> wrote:

> The actual error message we get is: 
> 
> " SELECT query FROM namedqueries WHERE userid = 4 AND name = '(Default
> query)': Got error 127 from table handler at globals.pl line 276. "

[EMAIL PROTECTED] egor]$ perror  127
Error code 127:  Unknown error 127
127 = Record-file is crashed

> After repairing the table by using the mysql Control Center we lose lots of
> data, even 60% of it. 
> 
> 
> 
> The mysql version is 4.0.18 installed on Red Hat.

Abnormal situation. What MySQL version is installed on Red Hat? The Red Hat
build or MySQL build or your own build? 





-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replication eats 99% CPU

2004-08-25 Thread Egor Egorov
Please download the official MySQL binaries and install them. Then check
whether the problem disappears. 





-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: anonymous localhost user in mysql user grant table

2004-08-25 Thread Egor Egorov
"rusty" <[EMAIL PROTECTED]> wrote:

> What's the purpose of the entry in the user table where it has 
> host='localhost' and user='' (blank) and all of the permissions set to N?

You may want to look at 

http://dev.mysql.com/doc/mysql/en/Privilege_system.html

and 

http://dev.mysql.com/doc/mysql/en/User_Account_Management.html





-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to Reset a field

2004-08-25 Thread Egor Egorov
"Yusdaniel Rodriguez Espinosa" <[EMAIL PROTECTED]> wrote:

> I have a db with table "usuarios" I erased all data in this table but "Id"
> is a autoinc and he have the value of the last value.
> 
> How I can Reset the field "ID"

See http://dev.mysql.com/doc/mysql/en/ALTER_TABLE.html at the very end.
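
In short, something like the following resets the counter (table name taken
from your message); the next insert starts again from 1 only if the table is
empty:

  ALTER TABLE usuarios AUTO_INCREMENT = 1;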





-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL query designer for *nix?

2004-08-25 Thread Egor Egorov
"Jens Bierkandt" <[EMAIL PROTECTED]> wrote:

> you might want to try this:
> DBDesigner from FabForce.

This seems to be what I'm looking for. Thank you!





-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Secure Connection(e.g. SSL) Question

2004-08-25 Thread SGreen
Something else I have done in the past was to use Cygwin to create an SSH 
session with the remote computer and use the remote computer's MySQL 
client/tools.

If you are used to working in a "Terminal Server" session (I think they 
now call it "Remote Desktop Connectivity") it will feel very familiar.  I 
also fiddled around with Cygwin long enough and hard enough (translate: I 
spent lots of time reading groups and docs and experimenting) to enable me 
to create local X windows so that I could run a full GUI shell (GNOME or 
KDE, I can't remember) remotely. Please don't ask me how I got it all set 
up as I only did it once and it was a long while ago. I know it's possible 
because I did it and it worked well for my situation.

FWIW,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



"Paul Maine" <[EMAIL PROTECTED]> wrote on 08/24/2004 12:03:19 PM:

> I need to connect to a remote MySQL database from a PC using SSL. I 
would
> prefer to connect using perl DBD. Does anyone have a suggestion how I 
can
> accomplish this task or an alternative solution?
> 
> Thank You
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 


Re: Create procedure problem

2004-08-25 Thread Martijn Tonies
Hi Joe,

> I want to create a stored procedure that I pass a date variable and two
> database names.  I can not seem to figure the correct syntax to utilize
> these variables in the procedure.  I think I might need to build a
> sqlstring and use an exec command to execute the string.  Here is my
> procedure code:
>
> # requires a table of paths called paths;
>
> create procedure calclmp(in startdate date, in db1 varchar(32), in db2
> varchar(32) );
> begin
> drop table if exists DB1.lmp_daily, DB1.t6, DB1.tmp_month_hr;
> create table DB1.lmp_daily like DB2.pjm_lmp_daily;

Unless you have a database named "DB1", I think the above will
fail - as far as I know, you cannot use variables/parameters like
that.
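
If your server build supports SQL-level prepared statements inside a stored
procedure (this arrived during the 5.0 series), one workaround is to build
the statement text and execute it dynamically. A rough sketch only, using the
parameter names from your procedure, and not tested against the alpha release
discussed here:

  SET @s = CONCAT('CREATE TABLE ', db1, '.lmp_daily LIKE ', db2, '.pjm_lmp_daily');
  PREPARE stmt FROM @s;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;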

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries on Fast Server?

2004-08-25 Thread JVanV8
>Could you send the output of an EXPLAIN for your query?

Sure, pretty sure the index is fine though:

mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST 
('black');
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
1 row in set (0.00 sec)




[EMAIL PROTECTED] wrote:
>>Have you checked the "Optimization" section of the manual yet?
>>http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
> 
> 
> Oh yes, as I've attempted to configure the my.cnf file for best performance.  The
> query is correct.  The fulltext index is correct as I built the fulltext index on the
> single column (took 9 minutes) and even did "repair" and "optimize" on the table... 
> so I
> don't think its the index.  I'm thinking its the server config...
> 
> - John
> 
> 
> 
> [EMAIL PROTECTED] wrote:
> 
>>I'm running into a problem with some queries running on a dedicated mysql server
> (2.0
>>GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST
> queries
>>are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2 -
> 0.75
>>seconds) but it was only fast for a day or so.
>>Here's the rundown:
>>
>>TABLE:  fulltext_table (some_id, the_text) 
>>Rows: 3,237,981 
>>Type: MyISAM
>>Size: 920.8 MB 
>>
>>QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST
> ('blue');
>>or 
>>QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH
> (the_text)
>>AGAINST ('blue') LIMIT 0, 20;
>>
>>Both are problematic.  I even tried placing a limit of 2 on the first query
> but
>>it didn't improve anything.  The table has a fulltext index on the column and is
>>optimized.  No other users are connected to the server.
>>
>>Is there a RED FLAG in here somewhere?
>>
>>MySQL configuration settings (using my-huge.cnf template):
>>key_buffer = 500M
>>max_allowed_packet = 1M
>>table_cache = 512
>>sort_buffer_size = 10M
>>read_buffer_size = 2M
>>myisam_sort_buffer_size = 64M
>>#thread_cache = 8
>>thread_concurrency = 8
>>#- Modifications --- #
>>ft_min_word_len = 3
>>set-variable = table_cache=1024
>>set-variable = max_heap_table_size=64M
>>set-variable = tmp_table_size=128M
>>set-variable = query_cache_limit=2M
>>query_cache_type=1
>>
>>
>>Performance Test:
>>SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
>>+--+
>>| COUNT(*) |
>>+--+
>>|95074 |
>>+--+
>>1 row in set (27.83 sec)
>>
>>Statistics for vmstat 1 (my apologies if this doesn't look pretty):
>>---
>>procs  memory  swap  io system cpu
>> r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
>>0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
>> 0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
>> 0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
>> 0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
>> 0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
>> 0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
>> 0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
>> 0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
>> 0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
>> 0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
>> 0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
>> 0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
>> 0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
>> 0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
>> 0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
>>procs  memory  swap  io system cpu
>> r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
>> 0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
>> 0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
>> 0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
>> 0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
>> 0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
>> 0  1  19516  17556  4

Re: Slow Queries on Fast Server?

2004-08-25 Thread V. M. Brasseur
Could you send the output of an EXPLAIN for your query?
--V
[EMAIL PROTECTED] wrote:
Have you checked the "Optimization" section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best performance.  The query is correct.  The 
fulltext index is correct as I built the fulltext index on the single column (took 9 minutes) and even did 
"repair" and "optimize" on the table... so I don't think its the index.  I'm thinking 
its the server config...
- John

[EMAIL PROTECTED] wrote:
I'm running into a problem with some queries running on a dedicated mysql server (2.0
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
queries
are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2 - 0.75
seconds) but it was only fast for a day or so.
Here's the rundown:
TABLE:  fulltext_table (some_id, the_text) 
Rows: 3,237,981 
Type: MyISAM
Size: 920.8 MB 
   
QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
or 
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text)
AGAINST ('blue') LIMIT 0, 20;

Both are problematic.  I even tried placing a limit of 2 on the first query but
it didn't improve anything.  The table has a fulltext index on the column and is
optimized.  No other users are connected to the server.
Is there a RED FLAG in here somewhere?
MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1
Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
+--+
| COUNT(*) |
+--+
|95074 |
+--+
1 row in set (27.83 sec)
Statistics for vmstat 1 (my apologies if this doesn't look pretty):
---
procs  memory  swap  io system cpu
r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
procs  memory  swap  io system cpu
r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
0  1  19516  17556  42420 176977200  188016  643   381  0  0 50 50
0  1  19516  17652  42420 176967600  1984 0  657   380  0  0 51 49
0  1  19516  17532  42416 176980000  1940 0  646   386  0  1 50 49
0  1  19516  17520  42416 176981200  1832 0  631   389  0  0 50 49
0  1  19516  17548  42412 176978800  2052 0  648   387  0  1 50 49
0  1  19516  17700  42412 176963600  244028  741   448  0  0 50 50
0  1  19516  17656  42408 176968400  2384 0  683   412  0  1 50 49
0  1  19516  17676  42408 176966000  2316 0  679   387  0  1 50 49
0  1  19516  17624  42404 176971200  2128 0  652   407  0  1 50 49
0  0  19516  19056  42404 17697520040 0  13240  0  0 97  2
Statistics for top command:
-
PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND

Create procedure problem

2004-08-25 Thread Joe Byers
I want to create a stored procedure that I pass a date variable and two 
database names.  I can not seem to figure the correct syntax to utilize 
these variables in the procedure.  I think I might need to build a 
sqlstring and use an exec command to execute the string.  Here is my 
procedure code:

# requires a table of paths called paths;
create procedure calclmp(in startdate date, in db1 varchar(32), in db2 
varchar(32) );
begin
drop table if exists DB1.lmp_daily, DB1.t6, DB1.tmp_month_hr;
create table DB1.lmp_daily like DB2.pjm_lmp_daily;

/*create table DB1.lmp_daily (_FREQ_ int, lmp_mean real,lmp_std 
real,class_type varchar(7))*/
insert into DB1.lmp_daily
   select if(si.hourtype=1,"OnPeak","OffPeak") as class_type,si.date, 
p.source,p.sink,
   count(si.price-so.price) as _FREQ_, sum(si.price-so.price) as 
lmp_sum,
   avg(si.price-so.price) as lmp_mean,
  
sqrt(variance(si.price-so.price)*count(si.price-so.price)/(count(si.price-so.price)-1)) 
as lmp_std
  from DB1.paths as p,DB2.lmprices_t as si, DB2.lmprices_t as so
  where p.source=so.pnodeid and p.sink=si.pnodeid
 and si.date>= STARTDATE and so.date>= STARTDATE
   and si.date=so.date and si.hour=so.hour and si.hourtype =so.hourtype
  group by si.date, p.source, p.sink, class_type;
insert into DB1.lmp_daily
   select "24H" as class_type, si.date, p.source,p.sink, 
count(si.price-so.price) as _FREQ_,
  sum(si.price-so.price) as lmp_sum,
   avg(si.price-so.price) as lmp_mean,
  
sqrt(variance(si.price-so.price)*count(si.price-so.price)/(count(si.price-so.price)-1)) 
as lmp_std
  from DB1.paths as p,DB2.lmprices_t as si, DB2.lmprices_t as so
  where p.source=so.pnodeid and p.sink=si.pnodeid
 and si.date>= STARTDATE and so.date>= STARTDATE
   and si.date=so.date and si.hour=so.hour /*and si.hourtype 
=so.hourtype */
  group by si.date, p.source, p.sink;

create table DB1.t6 (lmp real, lmp_option_long real, lmp_option_short real)
   select date, source, sink, class_type,lmp_sum as lmp,
   /*max*/if(lmp_sum>0,lmp_sum,0) as lmp_option_long,
   /*min*/if(lmp_sum<0,lmp_sum,0) as lmp_option_short
   from DB1.lmp_daily;
create table DB1.tmp_month_hr (year int, month int, _FREQ_ int, lmp_sum 
real, lmp_mean real, lmp_std real,
   Freq_minus int, freq_plus int,
   lmp_option_long_mean real, lmp_option_long_sum real, 
lmp_option_long_std real,
   lmp_option_short_mean real, lmp_option_short_sum real, 
lmp_option_short_std real,
   lmp_min real, lmp_max real, lmp_option_long_min real, 
lmp_option_long_max real,
   lmp_option_short_min real, lmp_option_short_max real)
   select year(date) as year, month(date) as month, source, sink, 
class_type, count(lmp) as _FREQ_,
   avg(lmp) as lmp_mean, avg(lmp_option_long) as 
lmp_option_long_mean, avg(lmp_option_short) as lmp_option_short_mean,
sum(lmp) as lmp_sum, sum(lmp_option_long) as 
lmp_option_long_sum, sum(lmp_option_short) as lmp_option_short_sum,
   sqrt(variance(lmp)*count(lmp)/(count(lmp)-1)) as lmp_std,
   
sqrt(variance(lmp_option_long)*count(lmp_option_long)/(count(lmp_option_long)-1)) 
as lmp_option_long_std,
   
sqrt(variance(lmp_option_short)*count(lmp_option_short)/(count(lmp_option_short)-1)) 
as lmp_option_short_std,
   min(lmp) as lmp_min,max(lmp) as lmp_max, min(lmp_option_long) as 
lmp_Option_long_min,
   max(lmp_option_long) as lmp_option_long_max, 
min(lmp_option_short) as lmp_Option_short_min,
   max(lmp_option_short) as 
lmp_option_short_max,count(if(lmp>0,lmp,null)) as Freq_plus,
   count(if(lmp<0,lmp,null)) as minus
   from DB1.t6
   group by year, month, source, sink, class_type;
drop table if exists DB1.t6;
end;

I appreciate any suggestions.
Thank you
Joe

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Slow Queries on Fast Server?

2004-08-25 Thread JVanV8
>Have you checked the "Optimization" section of the manual yet?
>http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best performance.  The 
query is correct.  The fulltext index is correct as I built the fulltext index on the 
single column (took 9 minutes) and even did "repair" and "optimize" on the table... so 
I don't think its the index.  I'm thinking its the server config...

- John



[EMAIL PROTECTED] wrote:
> I'm running into a problem with some queries running on a dedicated mysql server (2.0
> GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
> queries
> are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2 - 
> 0.75
> seconds) but it was only fast for a day or so.
> Here's the rundown:
> 
> TABLE:  fulltext_table (some_id, the_text) 
> Rows: 3,237,981 
> Type: MyISAM
> Size: 920.8 MB 
> 
> QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
> or 
> QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text)
> AGAINST ('blue') LIMIT 0, 20;
> 
> Both are problematic.  I even tried placing a limit of 2 on the first query but
> it didn't improve anything.  The table has a fulltext index on the column and is
> optimized.  No other users are connected to the server.
> 
> Is there a RED FLAG in here somewhere?
> 
> MySQL configuration settings (using my-huge.cnf template):
> key_buffer = 500M
> max_allowed_packet = 1M
> table_cache = 512
> sort_buffer_size = 10M
> read_buffer_size = 2M
> myisam_sort_buffer_size = 64M
> #thread_cache = 8
> thread_concurrency = 8
> #- Modifications --- #
> ft_min_word_len = 3
> set-variable = table_cache=1024
> set-variable = max_heap_table_size=64M
> set-variable = tmp_table_size=128M
> set-variable = query_cache_limit=2M
> query_cache_type=1
> 
> 
> Performance Test:
> SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
> +--+
> | COUNT(*) |
> +--+
> |95074 |
> +--+
> 1 row in set (27.83 sec)
> 
> Statistics for vmstat 1 (my apologies if this doesn't look pretty):
> ---
> procs  memory  swap  io system cpu
>  r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
> 0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
>  0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
>  0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
>  0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
>  0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
>  0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
>  0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
>  0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
>  0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
>  0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
>  0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
>  0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
>  0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
>  0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
>  0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
> procs  memory  swap  io system cpu
>  r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
>  0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
>  0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
>  0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
>  0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
>  0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
>  0  1  19516  17556  42420 176977200  188016  643   381  0  0 50 50
>  0  1  19516  17652  42420 176967600  1984 0  657   380  0  0 51 49
>  0  1  19516  17532  42416 176980000  1940 0  646   386  0  1 50 49
>  0  1  19516  17520  42416 176981200  1832 0  631   389  0  0 50 49
>  0  1  19516  17548  42412 176978800  2052 0  648   387  0  1 50 49
>  0  1  19516  17700  42412 176963600  244028  741   448  0  0 50 50
>  0  1  19516  17656  42408 176968400  2384 0  683   412  0  1 50 49
>  0  1  19516  17676  42408 176966000  2316 0  679   387  0  1 50 49
>  0  1  19516  17624  42404 176971200  2128 0  652   407  0  1 50 49
>  0  0  19516  19056  42404 17697520040 0  13240  0  0 97 

Re: PHP / Mysql people question

2004-08-25 Thread jeffrey_n_Dyke



Thought earlier that I could get by on just plain SQL.

> Php needed.  I'm getting a parse error on the "if"
> line.
> This is supposed to be a trigger then before
> transaction counts the number of rows , id already
> inserted, if exceeded , kick back error message , halt
> transaction.
> Not sure where I need to go with this statement to
> make it work. Ideas , help, suggestions welcome!

> //start trigger Trigger1
> function KT_TriggerSTARTER_Trigger1(&$tNG) {
> $result = mysql_query("SELECT LurkTitleTim.LurkID
> WHERE
> LurkID='$new_input'");

> if ($row=mysql_fetch_row($result) >= 5) {  error- data
> already exists   }
>
> else { go ahead and add data with INSERT statement
> }
> }

If this is the exact code, your error is at "error- data already exists",
which is not a valid statement.
Assuming it's not the exact code, you'll need to send the actual code to be
able to tell.
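
Note also that the SELECT in the snippet has no FROM clause. If the goal is
to count how many rows already carry that id and compare the count to 5, the
query itself can do the counting. A minimal sketch using the table and column
names from the snippet (the id value is a placeholder):

  SELECT COUNT(*) FROM LurkTitleTim WHERE LurkID = 'some_id';

Fetch that single value and compare it to 5 in PHP, rather than comparing the
result of mysql_fetch_row() directly.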

HTH
Jeff

> Thank you ,
> Stuart

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



PHP / Mysql people question

2004-08-25 Thread Stuart Felenstein
Thought earlier that I could get by on just plain SQL.
 
PHP needed.  I'm getting a parse error on the "if"
line. 
This is supposed to be a trigger that, before the
transaction, counts the number of rows with the id
already inserted; if the limit is exceeded, it kicks
back an error message and halts the transaction.
Not sure where I need to go with this statement to
make it work. Ideas, help, suggestions welcome!

//start trigger Trigger1
function KT_TriggerSTARTER_Trigger1(&$tNG) {
$result = mysql_query("SELECT LurkTitleTim.LurkID
WHERE
LurkID='$new_input'");

if ($row=mysql_fetch_row($result) >= 5) {  error- data
already exists   }

else { go ahead and add data with INSERT statement
}
}

Thank you ,
Stuart

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries on Fast Server?

2004-08-25 Thread V. M. Brasseur
Have you checked the "Optimization" section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
It's probably the best place to start.
Cheers,
--V
[EMAIL PROTECTED] wrote:
I'm running into a problem with some queries running on a dedicated mysql server (2.0 
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
queries are taking 5-20 seconds.  Performance was excellent for some reason one day 
(0.2 - 0.75 seconds) but it was only fast for a day or so.
Here's the rundown:
TABLE:  fulltext_table (some_id, the_text) 
Rows: 3,237,981 
Type: MyISAM
Size: 920.8 MB 

QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
or 
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue') LIMIT 0, 20;

Both are problematic.  I even tried placing a limit of 2 on the first query but it 
didn't improve anything.  The table has a fulltext index on the column and is 
optimized.  No other users are connected to the server.
Is there a RED FLAG in here somewhere?
MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1
Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
+----------+
| COUNT(*) |
+----------+
|    95074 |
+----------+
1 row in set (27.83 sec)
[vmstat and top output snipped; the full statistics are in the original message later in this digest]

Re: Using CREATE PROCEDURE/FUNCTION

2004-08-25 Thread Terry Riley
I think you'll find that 'create procedure' and 'create function' don't 
appear until version 5.0 of MySQL.

Which is why you get a syntax error (are you sure you have the right
version?).
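
Once you are on 5.0, the examples from the manual should work.  For
reference, a minimal sketch looks like this ("delimiter" is a command of
the mysql command-line client, not a server-side function):

DELIMITER //

CREATE PROCEDURE hello()
BEGIN
  SELECT 'Hello from a stored procedure';
END //

DELIMITER ;

CALL hello();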

Regards
Terry

- Original Message -

> Hi!
> Does anybody know how to use CREATE PROCEDURE and CREATE FUNCTION in
> 4.1.1-alpha-Max?
> 
> I keep trying the CREATE PROCEDURE and CREATE FUNCTION examples found
> in the MySQL web documentation, but there's no way to make them work. It
> always reports syntax errors. Is "delimiter" supported in all versions
> of MySQL? Is there a bug in this version? Any helping hand?
> 
> Thanks!
> Marti
> 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Using CREATE PROCEDURE/FUNCTION

2004-08-25 Thread Boyd E. Hemphill
Marti:

Stored procedures are new in version 5.0.


Best Regards,
Boyd E. Hemphill
MySQL Certified Professional
[EMAIL PROTECTED]
Triand, Inc.
www.triand.com
O:  (512) 248-2287
M:  (713) 252-4688

-Original Message-
From: Marti Quixal [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 25, 2004 8:23 AM
To: [EMAIL PROTECTED]
Subject: Using CREATE PROCEDURE/FUNCTION

Hi!
Does anybody know how to use CREATE PROCEDURE and CREATE FUNCTION in
4.1.1-alpha-Max?

I keep trying the CREATE PROCEDURE and CREATE FUNCTION examples found in
the MySQL web documentation, but there's no way to make them work. It
always reports syntax errors. Is "delimiter" supported in all versions
of MySQL? Is there a bug in this version? Any helping hand?

Thanks!
Marti


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Using CREATE PROCEDURE/FUNCTION

2004-08-25 Thread Marti Quixal
Hi!
Does anybody know how to use CREATE PROCEDURE and CREATE FUNCTION in
4.1.1-alpha-Max?

I keep trying the CREATE PROCEDURE and CREATE FUNCTION examples found in
the MySQL web documentation, but there's no way to make them work. It
always reports syntax errors. Is "delimiter" supported in all versions
of MySQL? Is there a bug in this version? Any helping hand?

Thanks!
Marti
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Suppression of result in SELECT @temp := column?

2004-08-25 Thread info
Is there any way to avoid sending the result of a user variable assignment to
the client?

I.e. Suppress the result of;

SELECT @temp := columnID FROM table WHERE column = whatever LIMIT 0,1

...since I only use @temp in my next statement to produce the actual result.


Also, is there an equivalent of the MSSQL statement;

SET NOCOUNT ON

...in MySQL, which suppresses the "rows affected" messages?

Thanks!

Jens Pettersson


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: AVG Function

2004-08-25 Thread Craig Hoffman
Mark,
Yes, it's close, and thank you.  The problem I am having is that I am able to
generate the correct ranking.id for that particular user, but I can't
seem to make it match up with the ranking.rating.

ID   ranking.rating
9   =  5.6 (example)
Here's my query:
SELECT routes.user_id, ranking.rating, ROUND(AVG(ranking.id), 0) 
avg_ranking, users.username, users.user_id, routes.rating FROM ranking, 
routes, users WHERE username='$username'  AND routes.user_id = 
users.user_id AND ranking.rating = routes.rating GROUP BY 
routes.user_id

//echo some stuff out
echo("".$row["ranking.rating"]." ");
I know I need to make the avg_ranking (the ranking.id) correspond to
ranking.rating, but I can't seem to get it to work.  Any more suggestions?
Again, thanks for all your help.

-- Craig
On Aug 25, 2004, at 12:48 AM, Mark C. Stafford wrote:
Hi Craig,
It sounds to me as though you're practically there. My syntax is a
little different when I do groupings. Here's what I got from your
question. Is it what you wanted...or close enough to get you where
you're going?
Good luck,
Mark
drop table if exists test.ranking;
create table test.ranking (
  id int(3) unsigned auto_increment primary key
, rating varchar(5)
);
insert into test.ranking(rating) values('5.0');
insert into test.ranking(rating) values('5.1');
insert into test.ranking(rating) values('5.2');
insert into test.ranking(rating) values('5.3');
insert into test.ranking(rating) values('5.3a');
insert into test.ranking(rating) values('5.3b');
drop table if exists test.routes;
create table test.routes (
  user_id int(3) unsigned
, rating varchar(5)
);
insert into test.routes(user_id, rating) values(1, '5.2');
insert into test.routes(user_id, rating) values(1, '5.3');
insert into test.routes(user_id, rating) values(1, '5.3a');
SELECT routes.user_id
, @avg:=ROUND(AVG(ranking.id), 0) avg_ranking
FROM test.ranking
, test.routes
WHERE routes.user_id = 1
AND ranking.rating = routes.rating
GROUP BY routes.user_id
;
SELECT *
FROM test.ranking
WHERE id = @avg
;
+----+--------+
| id | rating |
+----+--------+
|  4 | 5.3    |
+----+--------+
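
Combining Mark's two-step idea with the username-based query above might
look something like this (only a sketch -- it reuses the table and column
names from the posted query and assumes $username is already escaped):

SELECT @avg := ROUND(AVG(ranking.id), 0) AS avg_ranking
FROM ranking, routes, users
WHERE users.username = '$username'
  AND routes.user_id = users.user_id
  AND ranking.rating = routes.rating
GROUP BY routes.user_id;

SELECT rating FROM ranking WHERE id = @avg;

The second statement turns the averaged ranking.id back into its
ranking.rating value.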

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Dynamic Queries followup

2004-08-25 Thread SGreen
 By Jove! I think he's got it! 

BTW - Rhino, that was an EXCELLENT response. 

Stuart, I believe you have seen the light. What you figured out, thanks to 
Rhino and your own intuition, is basically how to normalize your data. 
Normalization is critical to database performance, scalability, and 
stability and the fact that you can see why so soon in your studies bodes 
very well for you. Keep reading, keep trying, and experiment while you 
have the chance. What you learn from this one project should serve you 
well for years to come.
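
For anyone else following along, a minimal sketch of the kind of table
Stuart describes below (the names are illustrative only):

CREATE TABLE certifications (
  member_id     INT NOT NULL,
  certification VARCHAR(50) NOT NULL,
  PRIMARY KEY (member_id, certification)
);

INSERT INTO certifications (member_id, certification)
VALUES (15, 'C#'), (15, 'Cisco'), (15, 'MySQL');

-- all certifications held by member 15:
SELECT certification FROM certifications WHERE member_id = 15;

-- all members holding a given certification:
SELECT member_id FROM certifications WHERE certification = 'MySQL';

One row per member/certification pair means any number of certifications
per member, and only one column to search on.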

Best Wishes,

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



Stuart Felenstein <[EMAIL PROTECTED]> wrote on 08/23/2004 07:57:52 PM:

> Thanks again for a wonderful detailed response.  The
> part that was still bothering me was the
> multiple-same-category fields.  I get locked into this
> thinking and well, it's difficult to let go.
> 
> Then I tried something on a table that had 
> MemID, Doc1, Doc2, Doc3 
> I had put Doc1 in a few hours earlier.  I went back to
> the form, not an update but thought a regular
> insertion on Doc2 would work, but mysql spit back a
> duplicate record error.
> Then I started thinking, well even if I had an update
> form, would I be able to just enter Doc2, and it
> retained Doc1 from the first insertion.  Then the
> lightbulb went off in my head and it all made sense to
> me.  Everything you said and all the examples.  Plus I
> realized I was getting into a horrible habit early on.
> 
> So to confirm before proceeding.  I have a form where
> I allow 10 categories of "certifications". 
> Now to correct it I'm thinking of creating a
> "certifications" table. 
> Then if ID 15 wants to enter 10 certifications it
> would look like the following in the certifications
> table (one column to search from):
> 
> ID Certification
> 15 C#
> 15 DCOM 
> 15 MCS
> 15 Cisco
> 15 Red Hat
> 15 MySQL
> 15 Oracle 9
> 
> I think this is right.  Please let me know what you
> think.
> 
> Stuart
> 
> 
> 
> 
> --- Rhino <[EMAIL PROTECTED]> wrote:
> 
> > This note is a followup to the thread entitled
> > "Dynamic Queries" which was initiated yesterday by
> > Stuart Felenstein. Stuart, I am deliberately
> > replying to your private followup question on the
> > list rather than privately because this thread could
> > potentially benefit other MySQL users who might be
> > having the same question today or some time in the
> > future. Having this discussion on the list means
> > that others can benefit from it; if we have this
> > discussion privately, you and I are the only ones
> > that benefit. Also, having the discussion on the
> > list is a good sanity check: if I say something that
> > is unclear or even flat-out wrong, there is a chance
> > that others reading this thread will see it and
> > correct us both. Otherwise, I could inadvertenly
> > mislead you and cause you all kinds of grief. 
> > 
> > First things first, Stuart. On re-reading my reply
> > to your question, I came across some typos that
> > could cause confusion so let me take the liberty of
> > repeating my reply, including the relevant bits of
> > your questions, with my amendments in square
> > brackets:
> > 
> > --
> > > Not sure what to call what I'm attempting to do,
> > > decided on dynamic queries. I should mention that
> > I've
> > > been working with databases for just a little over
> > a
> > > month.
> > >
> > > Example could be many web sites, but let's say
> > Expedia
> > > (the travel site..booking flights, cars, etc)
> > > My understaning is that the SQL statements are
> > > contained in a recordset, but variables would be
> > > passed back into the sql statement based on the
> > > particulars of the user's input?  Is that correct,
> > is
> > > that the only way ?
> > >
> > I've been working with relational databases, mostly
> > DB2, since 1985 and I'm
> > not entirely sure of the precise meaning of the term
> > "recordset". I've never
> > seen a formal definition of that term. However, I'm
> > pretty sure that a
> > "recordset" is not the query that you pass to the
> > database but the result
> > that you get back. It's always been my practice to
> > refer to the result of
> > the query as a result set, not a recordset. I'm not
> > trying to tell you not
> > to use the term recordset - it might be widely used
> > by some groups of people
> > for all I know - I'm just trying to explain where
> > I'm coming from.
> > 
> > As to your specific question, yes, SQL statements
> > can be written to contain
> > variables. For example, if you were searching on
> > Expedia, you could have a
> > query that tells the database to return hotel names
> > and addresses given a
> > specific city and date, which the user supplies at
> > execution time. The
> > result set will then contain all of the rows that
> > represent the prices of
> > hotel rooms in that city. [...]

Aggregating functions Enhancement and extension

2004-08-25 Thread Joe Byers
MYSQL new feature

The aggregating functions std and variance calculate
the population standard deviations and variance.  I
work extensively with datasets doing statistical
analysis.  Many times it is faster and more efficient
to perform the basic univariate statistics on the
server but I need the sample standard deviation and
variance.  The difference between the sample and
population statistics is the divisor: the population
divisor is count() of the records, while the sample
divisor is count()-1.  In Excel these are the
STDEV (sample), STDEVP (population), VAR (sample) and
VARP (population) functions.
 
I can get the sample statistic by deriving it in the
select statement from the population statistic and the
record count (a rough sketch follows below).
 
 
Would anyone else like this feature?
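
A rough sketch of that kind of workaround (the table and column names
here are invented): sample variance is just the population variance
rescaled by count/(count - 1).

SELECT
  COUNT(x)                                      AS n,
  VARIANCE(x)                                   AS var_pop,
  VARIANCE(x) * COUNT(x) / (COUNT(x) - 1)       AS var_samp,
  SQRT(VARIANCE(x) * COUNT(x) / (COUNT(x) - 1)) AS std_samp
FROM samples;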


=
A man is not the center of his universe, rather those he loves are.  So his focus 
should always be on them for they will provide him with love and happiness all of his 
life - Anonymous



___
Do you Yahoo!?
Win 1 of 4,000 free domain names from Yahoo! Enter now.
http://promotions.yahoo.com/goldrush

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Figuring out the ranking position of an item on a table given its partnumber

2004-08-25 Thread SGreen
You could simply run your query then use your programming language's 
facilities to scroll down the result set looking for the item in question. 
When you found the right record, you would know what its ranking was 
based on how many rows of your results you had to scan through before you 
found it.

If you want MySQL to autoassign numbers to rows you need to create an 
autoincrementing integer field and enough other fields on a temporary (or 
privately-named) table to store the results you want ranked.  Let's assume 
that you would like to analyze raw sales ranked in descending volume order:

CREATE TEMPORARY TABLE tmpRankings (
rank int auto_increment 
, PartNumber varchar(20)
, Quantity int
, primary key(PartNumber)
, key(rank)
, key(Quantity)
)

INSERT tmpRankings (PartNumber, Quantity)
SELECT part_number, quantity
FROM sales_data_table
ORDER BY quantity DESC


# this query will tell you the rank for a particular part
SELECT PartNumber, rank
FROM tmpRankings
WHERE PartNumber = 'part_in_question'

#this query will give you the 5th through 15th top selling products
SELECT rank, PartNumber, Quantity
FROM tmpRankings
WHERE rank BETWEEN 5 and 15

#this will give you the sales ranking of each product in a list of 
products
SELECT rank, PartNumber, Quantity
FROM tmpRankings
WHERE PartNumber IN ('item1', 'item2', 'item3','item4')

DROP TABLE tmpRankings

This creates a ranking table. Because the first column is 
autoincrementing, we don't fill it with data. The autoinc column becomes 
the sales ranking because we added the raw sales data to the table in 
order of descending sales volume. Select the product(s) you are interested 
in from the ranking table, then clean up afterwards. This table could take 
up considerable room, as I indexed all 3 fields on it for analysis speed. 
You could try it without the indices and see, but I think it would be 
rather slow. 

It's not all in one step but it's usually pretty quick and very useful.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine





"C.F. Scheidecker Antunes" <[EMAIL PROTECTED]> wrote on 08/23/2004 
07:50:02 PM:

> Hello All,
> 
> I have a table that has a PartNumber and the Quantity of Items sold for 
> each partNumber, that is:
> 
> PartNumber
> Qty
> 
> I need to get its sales ranking given its PartNumber. So if I 
> order the table by Qty in descending order, the first
> record will be the partNumber that sold the most. I want to know 
> what part number 123's ranking position is according to that.
> 
> Is there any easy way to do it?
> 
> Thanks in advance.
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 


Slow Queries on Fast Server?

2004-08-25 Thread JVanV8
I'm running into a problem with some queries running on a dedicated mysql server (2.0 
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
queries are taking 5-20 seconds.  Performance was excellent for some reason one day 
(0.2 - 0.75 seconds) but it was only fast for a day or so.
Here's the rundown:

TABLE:  fulltext_table (some_id, the_text) 
Rows: 3,237,981 
Type: MyISAM
Size: 920.8 MB 

QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
or 
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text) 
AGAINST ('blue') LIMIT 0, 20;

Both are problematic.  I even tried placing a limit of 2 on the first query but it 
didn't improve anything.  The table has a fulltext index on the column and is 
optimized.  No other users are connected to the server.

Is there a RED FLAG in here somewhere?

MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1


Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
+----------+
| COUNT(*) |
+----------+
|    95074 |
+----------+
1 row in set (27.83 sec)

Statistics for vmstat 1 (my apologies if this doesn't look pretty):
---
procs  memory  swap  io system cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
 0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
 0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
 0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
 0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
 0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
 0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
 0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
 0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
 0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
 0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
 0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
 0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
 0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
 0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
procs  memory  swap  io system cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
 0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
 0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
 0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
 0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
 0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
 0  1  19516  17556  42420 176977200  188016  643   381  0  0 50 50
 0  1  19516  17652  42420 176967600  1984 0  657   380  0  0 51 49
 0  1  19516  17532  42416 176980000  1940 0  646   386  0  1 50 49
 0  1  19516  17520  42416 176981200  1832 0  631   389  0  0 50 49
 0  1  19516  17548  42412 176978800  2052 0  648   387  0  1 50 49
 0  1  19516  17700  42412 176963600  244028  741   448  0  0 50 50
 0  1  19516  17656  42408 176968400  2384 0  683   412  0  1 50 49
 0  1  19516  17676  42408 176966000  2316 0  679   387  0  1 50 49
 0  1  19516  17624  42404 176971200  2128 0  652   407  0  1 50 49
 0  0  19516  19056  42404 17697520040 0  13240  0  0 97  2

Statistics for top command:
-
 PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 4784 root  15   0   488  488   420 S 0.2  0.0   0:00   0 vmstat
 3979 mysql 16   0 68128  52M  2188 S 0.1  2.6   0:06   1 mysqld
 3982 mysql 15   0 68128  52M  2188 S 0.1  2.6   0:05   2 mysqld
1 root  15   0   512  512   452 S 0.0  0.0   0:05   2 init
2 root  RT   0 00 0 SW0.0  0.0   0:00   0 migration/0
3 root  RT   0 00 0 SW0.0  0.0   0:00   1 migration/1
4 root  RT   0 00 0 SW0.0  0.0   0:00   2 migration

Re: Replication eats 99% CPU

2004-08-25 Thread Chua Choon Keng
At first, I thought it was a low memory problem. I
reduced my buffers in my.cnf and run the test
again. The master still hung and this is a
screenshot of "top" command just before it died:

http://choonkeng.hopto.org/temp/replication-hang.gif

There is still plenty of memory and no disk
swapping when the master goes to 99%. Error and slow
logs don't show any useful information.

I am totally clueless now, please share your
opinion. Thanks.


<< Original text follows >>

Hello everyone,

The master is moderately busy with load average of
2-3. This has worked very well for a long time
before I started replication (few months). When
replication is started, both master and slave
appear to work fine with no significant increase of
load on either side. 

But, after a few (random) hours, the master mysqld
process will start to use 99% CPU and the load will
go to 3, 4, 6, 15, 24, 56, 312... in a matter of
seconds! The master then stops responding and needs
a reboot.

My replication setup is as follows:

Master  Slave
 DB A  <==>  DB A
 DB B
 
The master is setup with binlog-do-db=A and slave
with replicate-do-db=A. Queries to A are mostly
SELECT while to B are mostly INSERT, UPDATE &
DELETE.

The queries that cause 99% CPU, as stated in
slow.log, are not really CPU intensive. (slow.log
indicates these queries took HOURS to run where
they would normally finish in 1 or 2 seconds).
These are queries we run every minute, and
MySQL normally handles them with no problem, so why
does mysqld suddenly eat all the CPU?

Can anyone shed some light?

Thanks in advance.

Regards,
CK



___
Do you Yahoo!?
Win 1 of 4,000 free domain names from Yahoo! Enter now.
http://promotions.yahoo.com/goldrush

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Testing for the existence of an index

2004-08-25 Thread Jesse Sheidlower
On Tue, Aug 24, 2004 at 11:57:05AM +0200, Thomas Spahni wrote:
> Jesse,
> 
> mysql> SHOW INDEX FROM mytable;
> 
> gives you all indexes for `mytable`; you can process the results with
> perl.

Thanks very much. This works fine, and since I don't care about the
return value--just that there is one--it becomes trivial.

Jesse Sheidlower

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Limiting record inserts by user

2004-08-25 Thread Neculai Macarie
> where ID = count() <= 5 , if ..
> 
> In other words, count() wouldn't be looking for an ID
> with a value of 5, but for the number of records the ID
> has in the table.

SELECT COUNT(*) AS users_records_number
FROM your_table
WHERE user_id = 5;
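
You then compare users_records_number to your limit in the application,
or fold the test into the query itself -- a sketch (your_table and
user_id are placeholders, as above):

SELECT COUNT(*) < 5 AS may_insert
FROM your_table
WHERE user_id = 5;

may_insert is 1 while the user is still under the limit and 0 once the
limit is reached.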

-- 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Limiting record inserts by user

2004-08-25 Thread Stuart Felenstein
I want to, and need to, have a way to check how many
records a user can insert into a table.
I thought it would be, and maybe it is, a statement
using count().  Can count() be used, though, if I want to
check by the user's ID?
In other words, something like:

where ID = count() <= 5 , if ..

In other words, count() wouldn't be looking for an ID
with a value of 5, but for the number of records the ID
has in the table.

Stuart

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MIN and JOIN -> USING TEMPORARY

2004-08-25 Thread Marc Debold
Hi there,
 
I'm having trouble with an SQL statement that puts too much load on our
server under heavy traffic. MySQL uses temporary files and filesort, so
I narrowed the problem down to this one here:
 
 
TABLE A:
ID  INTEGER   PRIMARY KEY
TEXTVARCHAR(10)
 
TABLE B:
ID  INTEGER   PRIMARY KEY
REF_ID  INTEGER
NUMBER  FLOAT(7,2)
 
No matter, how many entries I have in A and B and no matter what indexes
I create, I always get a USING TEMPORARY, USING FILESORT when EXPLAINing
the following simple statement:
 
SELECT a.id, MIN(b.number) AS low FROM a JOIN b ON (a.id = b.ref_id)
GROUP BY a.id ORDER BY low
 
What can I do to speed up this query? I need to get a list of rows from
table a, each with its lowest reference number from table b.
 
Thanks in advance,
Marc


Re: One form multiple inserts

2004-08-25 Thread Stuart Felenstein
Never mind, finally figured it out on my own.  
Sorry, I think out loud sometimes!

Stuart

--- Stuart Felenstein <[EMAIL PROTECTED]> wrote:

> Trying to figure out how this gets done. Let me
> explain first. By the way I hope this is not off
> topic
> as perhaps it is more towards application then core
> db
> .
> 
> I have a table:
> 
> Count[int][auto-increment], MemberID[int],
> Title[varchar], TitleYear[int]
> 
> In my organization members can have or have held
> various titles.  I'm trying to collect, right now, 5
> titles and the years they've held those titles.
> 
> Extraneous - Now I'm in a precarious place, cause I
> am
> new to web dev and db and have had the fortunate
> experience to be using an extension to dreamweaver
> that makes it relatively simple.  Until you have to
> go
> beyond what they provide in functionality /
> capability. It's php / adodb.
> 
> So I've looked around and found a few "methods" but
> no
> detail, nor have I really drilled down on how they
> get
> implemented. 
> 
> One method was basically a loop of sorts.  Not
> entirely sure but I suppose pages can keep
> refreshing
> 5 times.
> Second, was to pass all the variables over to
> another
> page.  It sounded like maybe this secondary page is
> hidden but set up to accept an insert.  The third,
> which I tried, unsuccessfully, was through the use
> of
> after triggers.  Possible I need to work on this
> more.
> 
> Anyway I wouldn't mind hearing how other people deal
> with this issue.  Not sure if I could set up some
> SQL
> statements.
> 
> Thank you ,
> Stuart
> 
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:   
>
http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Importing fields of constant length [solved]

2004-08-25 Thread Sebastian Haag
My tongue spoke too early, sorry!

Sebastian Haag said:
> Hello,
>
> is it possible to import data into a MySQL-DB from a text-file which does
> not have separating characters (like a comma, semicolon or tab)?
>
> I have a .csv-file in which each column has a different constant length
> (so many characters or digits) that I need to import.
>
> For example:
>
> 00721617  20040625  1000  Z
> 00721617  20040626  1000  Z
> 00721617  20040625  1000 OP
> ...
>  ^10   ^20^30
>
> Now I would like to read this into my database into certain fields. First
> four digits should go to Field1, the next six should go to Field2 and so
> on.
>
> I checked out the LOAD DATA INFILE syntax but couldn't find anything. I
> cannot use the FIELDS TERMINATED BY option.

Not true:

http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html
/*little more than halfway down the page*/

It says:

If the FIELDS TERMINATED BY and FIELDS ENCLOSED BY values are both empty
(''), a fixed-row (non-delimited) format is used. [...] For example, if a
column is declared as INT(7), values for the column are written using
seven-character fields. On input, values for the column are obtained by
reading seven characters. LINES TERMINATED BY is still used to separate
lines. [...]
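
So a rough sketch for your case (the file name and exact widths are just
for illustration -- the field widths come from the declared column sizes):

CREATE TABLE fixed_import (
  field1 CHAR(4),
  field2 CHAR(6),
  field3 CHAR(10),
  field4 CHAR(4)
);

LOAD DATA INFILE '/tmp/fixed.txt'
INTO TABLE fixed_import
FIELDS TERMINATED BY '' ENCLOSED BY ''
LINES TERMINATED BY '\n';

Each column is filled by reading exactly as many characters as its
declared width, in order.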


>
> Is there a straightforward way to do this or do I have to load it into one
> large field and separate the fields by using MySQL's string functions?
>
> Thanks in advance for any suggestions!
>
> Sebastian
>
> --
>
> Once a problem is defined - it is half way solved. (Henry Ford)
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>


-- 

Once a problem is defined - it is half way solved. (Henry Ford)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Importing fields of constant length

2004-08-25 Thread Sebastian Haag
Hello,

is it possible to import data into a MySQL-DB from a text-file which does
not have separating characters (like a comma, semicolon or tab)?

I have a .csv-file in which each column has a different constant length
(so many characters or digits) that I need to import.

For example:

00721617  20040625  1000  Z
00721617  20040626  1000  Z
00721617  20040625  1000 OP
...
 ^10   ^20^30

Now I would like to read this into my database into certain fields. First
four digits should go to Field1, the next six should go to Field2 and so
on.

I checked out the LOAD DATA INFILE syntax but couldn't find anything. I
cannot use the FIELDS TERMINATED BY option.

Is there a straightforward way to do this or do I have to load it into one
large field and separate the fields by using MySQL's string functions?

Thanks in advance for any suggestions!

Sebastian

-- 

Once a problem is defined - it is half way solved. (Henry Ford)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Migration tools/plan from oracle 7.3 to Mysql 4.0

2004-08-25 Thread Karam Chand
Take a look at SQLyog's ODBC Import Tool. It can be
found at http://www.webyog.com

Regards,
Karam
--- [EMAIL PROTECTED] wrote:

> 
> Hi,
> I have to migrate an Oracle 7.3 database to
> MySQL; for Oracle we used Pro*C.
> The front end uses Java, JNI, RMI and Pro*C.
> In the migration process, what will be the best
> equivalent of Pro*C in MySQL?
> We also need to use XML; for this environment,
> which language is better suited for XML handling
> (C++ or Java), and what are the best parsers for
> XML (like JAXP, etc.)? Please advise.
>  
> Thanks,
> Narasimha
> 91 98456 82459
> 
>   -Original Message- 
>   From: Lakshmi NarasimhaRao (WT01 - TELECOM SERVICE
> PROVIDER) 
>   Sent: Tue 8/17/2004 3:08 PM 
>   To: [EMAIL PROTECTED]; [EMAIL PROTECTED] 
>   Cc: 
>   Subject: RE: Migration tools/plan from oracle 7.3
> to Mysql 4.0
>   
>   
> 
> 
>   Hi,
> Thank you for your response. Could anyone
> give more clarification on the alternatives for
> views, stored procedures, triggers and constraints of
> Oracle in MySQL 4.0 Classic, as MySQL 4.0 Classic
> won't support the InnoDB storage engine?
>   
>   Thanks,
>   Narasimha
>   
>   -Original Message-
>   From: Martijn Tonies
> [mailto:[EMAIL PROTECTED]
>   Sent: Tue 8/17/2004 12:59 PM
>   To: Lakshmi NarasimhaRao (WT01 - TELECOM
> SERVICE PROVIDER); [EMAIL PROTECTED]
>   Cc:
>   Subject: Re: Migration tools/plan from
> oracle 7.3 to Mysql 4.0
>  
>  
>   
>   First: please don't hijack threads... And
> don't quote them completely
>   if you do.
>  
>   >  I am new to MySql. Could you
> please let me know the different
>   migration tools available for the migration
> of oracle 7.3 database to mysql
>   4.0 classic version.
>  
>   You could try the Schema Migrator tool in
> Database Workbench
>   ( www.upscene.com ) - it supports ADO and
> ODBC connectivity
>   to MySQL. Might not be perfect, but sure
> gives you a good start.
>  
>   >Out of those, which one is the best in
>   terms of performance and cost? As far as I
>   know, views, stored procedures and
>   triggers are not supported in MySQL
>   4.0. Is that correct?
>  
>   That is correct. Add to that: no CHECK
> constraints either.
>   Foreign Key Constraints and transactions
> are only supported with
>   the InnoDB storage engine - read about that
> in the documentation.
>  
>   If so, what are all the alternatives for
>   converting the views, stored
>   procedures and triggers in Oracle 7.3 to
>   MySQL 4.0? I would be very grateful for
>   a good explanation of this.
>   >
>  
>   With regards,
>  
>   Martijn Tonies
>   Database Workbench - developer tool for
> InterBase, Firebird, MySQL & MS SQL
>   Server.
>   Upscene Productions
>   http://www.upscene.com
>  
>  
>   --
>   MySQL General Mailing List
>   For list archives:
> http://lists.mysql.com/mysql
>   To unsubscribe:   
>
http://lists.mysql.com/[EMAIL PROTECTED]
>  
>  
>   
>   
>   
>   
>   
> 
> 
> 
> 
> Confidentiality Notice 
> 
> The information contained in this electronic message
> and any attachments to this message are intended
> for the exclusive use of the addressee(s) and may
> contain confidential or privileged information. If
> you are not the intended recipient, please notify
> the sender at Wipro or [EMAIL PROTECTED]
> immediately
> and destroy all copies of this message and any
> attachments.
> 




___
Do you Yahoo!?
Win 1 of 4,000 free domain names from Yahoo! Enter now.
http://promotions.yahoo.com/goldrush

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]