Connect fails but only from this one file WHY

2005-10-31 Thread Gerald Taylor

I am developing a PHP app.  At the beginning of every file there is a
require_once('./setup.php'); that does the housekeeping, fetches the
connection credentials, and connects to the database.  All that worked
fine until this morning, when I made a new file.


Now I get a connection failure when I try to connect, but ONLY from this
one PHP file.  I can connect to the server fine manually, the server is
not unduly loaded, all the other apps that use this DB server are fine,
and even the other files in THIS app using THIS particular DB seem to be
connecting fine.  Well, actually, I should say that one of the other files
*did* get this error once, but on a subsequent call it corrected itself.
I should also say that the bad file did connect fine the first time
I ran it, but I haven't been able to make it do so since.  Is it possible
that the bad new file is doing something bad that breaks subsequent
connections?  If so, what?


It is using exactly the same include file that works in all the other
files.  I'm stumped.  Thank you...



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



How to get a count from this query

2005-10-13 Thread Gerald Taylor

 SELECT AVG(ABS(pmd.value) - mypmd.value) AS diff, pm.mname, pmd.uid
 FROM pmdata pmd, mnames pm, pmdata mypmd
 WHERE mypmd.uid = ?
   AND pmd.pmid = pm.pmid AND mypmd.pmid = pm.pmid
   AND pmd.uid != mypmd.uid
 GROUP BY pmd.pmid, pmd.uid ORDER BY pmd.uid

What I would also like this query to return is a COUNT of the number
of rows per pmd.uid, so I know how many values I got from
uid #1, uid #2, etc.


I tried putting COUNT(pmd.uid) AS numdelta in the select list, but it just
gave me a 1 in every row.  The problem is I am not doing a straight
GROUP BY pmd.uid.
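
Since the query groups by (pmd.pmid, pmd.uid), any COUNT() in it can only count rows within each of those small groups. One workaround (a sketch, not from the thread) is a second query with the same join conditions but grouped by uid alone, merged with the first result set in the application:

```sql
-- Hypothetical companion query: number of rows contributed by each uid.
SELECT pmd.uid, COUNT(*) AS numdelta
FROM pmdata pmd, mnames pm, pmdata mypmd
WHERE mypmd.uid = ?
  AND pmd.pmid = pm.pmid AND mypmd.pmid = pm.pmid
  AND pmd.uid != mypmd.uid
GROUP BY pmd.uid;
```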









Re: Suppress table header when using ODBC

2005-10-11 Thread Gerald Taylor

Gleb Paharenko wrote:

Hello.



I am using a desktop program that imports data from a mysql



What program?



Printbench Pro


for one  little glitch:  it adds one extra row at the beginning of
the dataset with the names of the columns in it.   I need for that



Does your program show column names in the numeric fields?

In a manner of speaking, yes.  When you do the query it shows all the
data in spreadsheet fashion.  The columns all have headers, and you can't
edit them.  Then, when you go to layout, all the field headers are used as
placeholders and may be dragged around and formatted in the layout.


I have also contacted Elkriver tech support about this, but I thought
it might be an issue I could solve on the MySQL side.

The problem is that I want to bar-encode one of the fields, and the bar
encoding I want to use is numeric-only.  So it rejects the whole
column just because the first row has a non-numeric field header, and
that is why I need the header row to go away.


Gerald Taylor wrote:


Hello,

I am using a desktop program that imports data from a mysql
database using the ODBC mysql driver and everything works fine except
for one  little glitch:  it adds one extra row at the beginning of
the dataset with the names of the columns in it.   I need for that
column name row to NOT be there.  Do I do something to the query to
suppress it, or is it some setting I set up?  I've googled and found nothing.
MySQL 4.1 if it matters.  I know I remember reading somewhere how
to suppress this.   Thanks














Suppress table header when using ODBC

2005-10-10 Thread Gerald Taylor

Hello,

I am using a desktop program that imports data from a mysql
database using the ODBC mysql driver and everything works fine except 
for one  little glitch:  it adds one extra row at the beginning of

the dataset with the names of the columns in it.  I need for that
column name row to NOT be there.  Do I do something to the query to
suppress it, or is it some setting I set up?  I've googled and found nothing.
MySQL 4.1, if it matters.  I know I remember reading somewhere how
to suppress this.  Thanks






best practices for finding duplicate chunks

2005-08-14 Thread Gerald Taylor
I just revived a database that was on a version 3.23 server and moved it
to a 4.1.  There are big fields of TEXT-based data.  They have a way of
compressing the amount of TEXT data by identifying common subchunks,
putting them in a subchunk table, and replacing them with a marker
inside the main text that pulls in that subchunk whenever the parent
chunk is requested.  This subchunking seems to have been done kind of
ad hoc, because I've noticed the database still has quite a bit of
duplicated chunks from one record to another.  The client does not want
to buy another drive to store data (even though he really should, for
other reasons, but who cares what I think), so he wants it compressed,
and oh well, I look on it as an opportunity for some housecleaning.

Now that we have 4.1, what is the best practice for automatically
finding common subchunks, factoring them out, and then replacing the
original parent text with itself with the chunk cut out and a marker
inserted?  The hard part is finding them, obviously.  The rest is easy.






Re: best practices for finding duplicate chunks

2005-08-14 Thread Gerald Taylor

Thanks for your answer.  It would certainly work, provided I have
enough disk space to do it.  I was thinking of something like that, but
was hoping I could leverage fulltext search and just record the fulltext
match score between each record and each other record.  Then I could
group all records that correlate highly and do a much smaller-scale
version of the brute-force indexing you are proposing, i.e. only run it
on a group of records that we already know have a high correlation,
i.e. a high probability of sharing a chunk in common.  Then, when done,
I can throw away that data and do another group.  What do you think?
Processing cycles I have, but easy disk space I don't.

Alexey Polyakov wrote:

There's no easy way to do it I think. But if spending a few hours
(days?) programming is ok with you I'd suggest something like this:
1) create a table (let's call it hashes) with three columns: hash,
doc_id, pos_id (doc_id is an identifier for records from the table with
the big text chunks)
2) retrieve a record from the big table.  Calculate a hash value for the
concatenated first 20 words of the text.  Insert this value into the
hashes table, with 1 as the value of pos_id.  Calculate the hash for the
concatenated 20 words starting from the 2nd word of the text, and also
insert it into the hashes table (2 as the value of pos_id).  Repeat until
you reach the end of the text.
3) Repeat 2) for all records of big table
4) Now you have all the data needed for identifying those duplicate chunks.
SELECT hash, COUNT(doc_id) AS c FROM hashes GROUP BY hash HAVING c > 1;
will return all hashes for 20-word chunks that are found in 2 or more documents.
SELECT doc_id FROM hashes WHERE hash = some_value;
will return the documents that contain a given chunk.

select h1.pos_id, h2.pos_id from hashes h1, hashes h2 where
h1.doc_id=doc1 and h2.doc_id=doc2 and h1.hash=h2.hash order by
h1.pos_id;
will return word positions for duplicate text in two documents.
For example last query returns:
156 587
157 588
...
193 624
It means that you can take words 156-212 from doc1, insert them into the
subchunks table, and replace words 156-212 in doc1 and words 587-643
in doc2 with a marker.


Yeah it looks ugly, and will take a lot of space for temporary data.
But in the end you'll have all 20+ words duplicate chunks properly
identified.
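
Alexey's steps 1 and 4 can be written out as concrete SQL (a sketch; MD5() as the hash function and the column types are my assumptions, not something he specified):

```sql
-- Step 1: one row per 20-word window per document.
CREATE TABLE hashes (
  hash   CHAR(32) NOT NULL,   -- e.g. MD5() of the 20 concatenated words
  doc_id INT      NOT NULL,   -- id of the record in the big text table
  pos_id INT      NOT NULL,   -- 1-based word offset of the window
  KEY (hash),
  KEY (doc_id)
);

-- Step 4: hashes that occur in two or more documents.
SELECT hash, COUNT(DISTINCT doc_id) AS c
FROM hashes
GROUP BY hash
HAVING c > 1;
```

COUNT(DISTINCT doc_id) guards against a chunk that merely repeats within a single document.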

On 8/14/05, Gerald Taylor [EMAIL PROTECTED] wrote:


I just revived a database that was in a version 3.23 server and moved it
to a 4.1   There are big fields of TEXT based data.  They have a way of
compressing the amount of TEXT data by identifying common subchunks and
putting them in a subchunk table and replacing them with a marker
inside the main text that will pull in that subchunk whenever the parent
chunk is requested.  This subchunking seems to have been done kind of
ad hoc, because I've noticed the database still has quite a bit of
duplicated chunks from one record to another.  The client does not want
to buy another drive to store data (even though he really should for
other reasons anyway but who cares what I think) , so he wants it
compressed, and oh well I look on it as an opportunity for some
housecleaning.  Now that we have 4.1 what is the best practice for
automated looking for common subchunks, factoring them out, and then
replacing the original parent text with itself with the chunk cut out
and a marker inserted.  The hard part is finding them, obviously.  The
rest is easy.















Re: MySQL constraint question

2005-02-23 Thread Gerald Taylor
Thanks for these answers.  I can now write this in the application and
be OK with it, without that nagging feeling that somebody will say, aha,
if you just put xxx in the database then you could filter all that in
your query.  If it ever happens I can say, ah yes, but when I wrote that
we were only on version 4.

When you have a problem set that pushes the envelope of what it
can do... that's how people get the ideas to make it do a new level of 
things in the future, such as you are envisioning.

Martijn Tonies wrote:
I have a database that's full of ingredients that are placed in various
categories, and then there are mixtures that are allowed to
be labeled with a certain grade based on the quality and
composition of the ingredients they are made from.
But the formulas are not always the same, as long as the profile matches

certain criteria.
For example, in order to be labeled grade Premium,
it must have between 70% and 95% ingredients from group A
between 0 and 15 % from group B
between 5 and 15% from group C
between 0 and 15% from group D
What you actually put in the mix is determined by price
and availability and other factors.
So I implement these profiles with a MySQL table:
grade_id      points to the main grade record
category_id   points to the category
min_percent
max_percent
and the individual ingredients:
ing_id
name
price
category_id   among others...
The question is
Is there some way MySQL can assure the integrity of these profiles, so
that there is no way to use a set of records for a grade that can't add
up to 100%?  Or is this pretty much application logic?

Nope, the combined total = 100% calculation will have to be
application-based as it requires validating an inserted or updated row
based on the value(s) contained in other row(s). CHECK constraints, which
are in the development pipeline, can only be used to validate a row
against constant values or some combination of values from within the row
being evaluated. Any time you need to compare a group of rows in order to
validate the group, you have left the automation of SQL logic and are in
the realm of application logic, as you guessed.

Disagreed :-)
A decent database system would be able to create multi-row check
constraints - there are several types of constraints.
1) column constraints
2) table constraints
3) database/schema constraints
These would fall under (3) I guess.
Firebird allows queries in its check constraints, but only enforces
the constraints at INSERT or UPDATE time.
A database system that would support deferred constraints should
be able to create multi-table, multi-row check constraints just fine.
However, I don't know any DBMS that currently does that. Perhaps
Mimer or ThinkSQL...
With regards,
Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com




MySQL constraint question

2005-02-18 Thread Gerald Taylor
I have a database that's full of ingredients that are placed in various
categories, and then there are mixtures that are allowed to
be labeled with a certain grade based on the quality and
composition of the ingredients they are made from.
But the formulas are not always the same, as long as the profile matches
certain criteria.

For example, in order to be labeled grade Premium, a mixture must have:
between 70 and 95% ingredients from group A
between 0 and 15% from group B
between 5 and 15% from group C
between 0 and 15% from group D
What you actually put in the mix is determined by price,
availability, and other factors.

So I implement these profiles with a MySQL table:
grade_id      points to the main grade record
category_id   points to the category
min_percent
max_percent
and the individual ingredients:
ing_id
name
price
category_id   among others...
The question is:
Is there some way MySQL can assure the integrity of these profiles, so
that there is no way to use a set of records for a grade that can't add
up to 100%?  Or is this pretty much application logic?
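
As an application-side sanity check, a query along these lines (a sketch; the table name grade_profile is hypothetical) would flag any grade whose profile can never sum to 100%:

```sql
-- A grade is unsatisfiable if its minimums already exceed 100%
-- or its maximums cannot reach 100%.
SELECT grade_id,
       SUM(min_percent) AS min_total,
       SUM(max_percent) AS max_total
FROM grade_profile
GROUP BY grade_id
HAVING min_total > 100 OR max_total < 100;
```

Run it after every change to a profile; an empty result means every grade can still reach exactly 100%.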




Tricky self join query help?

2004-11-10 Thread Gerald Taylor
I have a table of events.  Each event has an owner
id and the time that it happened.
What I want to do is delete all events
more than three months old, but only if the owner does not own
any newer events.
The coolest would be a single DELETE query.
Can this be done?
MySQL 4.0.18




Re: Tricky self join query help?

2004-11-10 Thread Gerald Taylor
The parts I am interested in (I won't bore you with the fields not
relevant to this problem):

CREATE TABLE events (
  e_id int(15) NOT NULL auto_increment,
  e_owner int(15) NOT NULL default '0',
  e_time int(15) NOT NULL default '0',
  -- other columns omitted
  PRIMARY KEY  (e_id)
) TYPE=MyISAM;

Thanks.
And I like that other answer, although it has all NULLs in the second
owner column and I don't get how it works.
When I write applications that delete, I always back up the table and
use a copy or a small sample on a play database.
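
For what it's worth, here is my reconstruction of how the LEFT JOIN answer likely works (a sketch, assuming e_time holds a Unix timestamp and approximating three months as 90 days). The all-NULL second owner column is the point: the LEFT JOIN looks for a newer event by the same owner, and NULL means none was found, so the row is safe to delete. MySQL 4.0 supports this multi-table DELETE form:

```sql
DELETE events
FROM events
LEFT JOIN events AS newer
       ON newer.e_owner = events.e_owner
      AND newer.e_time >= UNIX_TIMESTAMP() - 90 * 86400
WHERE events.e_time < UNIX_TIMESTAMP() - 90 * 86400
  AND newer.e_id IS NULL;   -- NULL = this owner has no newer events
```

Trying it on a copy first, as you describe, is exactly right.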
[EMAIL PROTECTED] wrote:
If you post the table structure (SHOW CREATE TABLE tablename\G) we could 
help you write this statement.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
Gerald Taylor [EMAIL PROTECTED] wrote on 11/10/2004 11:52:35 AM:
  I have this table of events. Each event has an owner
  id and the time that it happened.
 
  What I want to do is delete all events
  more than three months old  but only if the owner does not own
  any newer events.
 
  The coolest would just be a single DELETE query.
 
  Can this be done?
  Mysql 4.0.18
 
 
 
 
 
 
 



Re: Script question

2004-10-20 Thread Gerald Taylor
Philippe Poelvoorde wrote:

mysql> system echo hi there;
mysql> system ls -ls;
mysql> system uname -a;
all work on the 2.4.22-10mdk kernel, and the semicolon is optional.

mysql> system echo hi there

should also be working: the first parameter, 'echo', is recognized as the
command to execute, and the following strings as the parameters of this
command.  So since 'echo hi there' taken as a single word is not a
command, that form wasn't working (try doing `echo\ hi\ there` at your
prompt :)
Since some commands require quotes, it could be useful.
(system doesn't work with MySQL 3.23, does it?)

4.0.18, doesn't seem to work for me.



Query optimization question

2004-10-04 Thread Gerald Taylor
Query optimization question
I am selecting from a single table, but it has a lot of rows and the
query has a very involved calculation.  What I really want to do is
FIRST restrict the number of rows, so that the big calculation is only
performed on the ones that are within 3 degrees.

Using 4.0.20.
A sample query is given here.
The application interpolates variable values such as 44.6 into
the query string, so from MySQL's point of view they are constants,
right?  And the EXPLAIN doc says it optimizes constants, but it is
looking at all the rows, and I see why.
SELECT city, state, country, latitude, longitude,
IF(latitude REGEXP '[0-9]+$' AND longitude REGEXP '[0-9]+$',
   ROUND(DEGREES(ACOS(SIN(RADIANS(44.6)) * SIN(RADIANS(latitude))
                    + COS(RADIANS(44.6)) * COS(RADIANS(latitude))
                    * COS(RADIANS(-123.28 - longitude)))) * 111),
   NULL) AS distance
FROM londata
WHERE ABS(44.6 - latitude) <= 3.0 AND ABS(-123.28 - longitude) <= 3.0
ORDER BY distance;

I guess I can't do a subselect with my version...
If I could, what would it look like?  Something like below?
(I might be able to talk the powers that be into an upgrade.)
And if I can't, is it more horrible to manually create a temporary
table and perform the calculations on it than it is to just do what
I am doing?
SELECT city, state, country, latitude, longitude,
IF(latitude REGEXP '[0-9]+$' AND longitude REGEXP '[0-9]+$',
   ROUND(DEGREES(ACOS(SIN(RADIANS(44.6)) * SIN(RADIANS(latitude))
                    + COS(RADIANS(44.6)) * COS(RADIANS(latitude))
                    * COS(RADIANS(-123.28 - longitude)))) * 111),
   NULL) AS distance
FROM (SELECT * FROM londata
      WHERE ABS(44.6 - latitude) <= 3.0
        AND ABS(-123.28 - longitude) <= 3.0) AS sublon
ORDER BY distance;
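
A side note (my suggestion, not from the thread): ABS(44.6 - latitude) <= 3.0 wraps the column in a function, so MySQL cannot use an index on latitude even if one exists. If latitude and longitude are numeric columns (the REGEXP test suggests they may actually be stored as strings, in which case this does not apply directly), an equivalent, index-friendly bounding box would be:

```sql
-- Same rows, but sargable: a plain range on the bare columns.
SELECT city, state, country, latitude, longitude
FROM londata
WHERE latitude  BETWEEN 44.6 - 3.0    AND 44.6 + 3.0
  AND longitude BETWEEN -123.28 - 3.0 AND -123.28 + 3.0;
```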

Thanks.
GT


Re: SELECT queries on replicate DB server

2004-09-28 Thread Gerald Taylor
Thanks, that is a really good answer.  Raises a bunch more
questions but they're good ones.
Jim Grill wrote:
My question is:
Why would they deem it necessary to use yet a third server?  Could it
be because the main server and the main slave are constantly being
updated, and they wouldn't want to overload the main slave (which
is not as high-horsepower a box, I know for a fact)?  Could it
be because the subset of tables they put on the third server
is relatively more stable, and hence there aren't so many write-throughs,
so it can handle the complex selects better?
All theories gladly accepted...
I'm not too sure about the third server either, but I do have an idea.  It
wouldn't make much sense if the third server had different data on it, since
that would tend to make things difficult to keep up to date - or maybe not.
It might be a slave that they connect to the master only every so often.
It's very common to have applications that write to one server and read from
a slave server. Sometimes many slave servers since there are typically way
more reads than writes.
Perhaps they use the third server so that if the master or slave servers die
there will always be a spare server for reads.
As far as any difference in the tables on the third server... since it is
doing selects only, you can start a slave server with a few options to speed
things up, like --skip-innodb, --skip-bdb, --low-priority-updates,
and --delay-key-write=ALL, which will force the server to use
non-transactional MyISAM tables for better performance.
It's really tough to speculate. Every system administrator would probably do
it a different way.
Jim Grill
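
The startup options Jim lists could equally live in the slave's my.cnf; a sketch (option spellings as in MySQL 4.x, not verified against any particular minor version):

```ini
# my.cnf fragment for a read-only reporting slave
[mysqld]
skip-innodb
skip-bdb
low-priority-updates
delay-key-write = ALL
```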




SELECT queries on replicate DB server

2004-09-27 Thread Gerald Taylor
We have a main DB server that has a pretty big database, and it sometimes
gets overloaded.  We also have a replicated slave server: every update to
the main server writes through to it, and it is the one that is backed up,
to avoid interrupting service for the website.

There are certain sections of the website where all the queries are
SELECT-only.  So what was done was to replicate the affected tables on
yet a third server.  Now we're set up so that when that block of code is
invoked, it connects to the third DB server to take the load off the main
one, knowing ahead of time that there won't be any updates to it;
everyone is happy, tra la la la, life goes on.

My question is:
Why would they deem it necessary to use yet a third server?  Could it
be because the main server and the main slave are constantly being
updated, and they wouldn't want to overload the main slave (which
is not as high-horsepower a box, I know for a fact)?  Could it
be because the subset of tables they put on the third server
is relatively more stable, and hence there aren't so many write-throughs,
so it can handle the complex selects better?

All theories gladly accepted...


Re: MySQL book

2004-07-26 Thread Gerald Taylor
I totally recommend Paul DuBois's book.
Excellent book; I own the first edition.
I bet the second edition is just as good and more up to date.
Paul DuBois wrote:
At 1:55 +0200 7/26/04, Schalk Neethling wrote:
Can anyone suggest a great book to learn MySQL inside out?  I am
thinking of getting: *MySQL* by Paul DuBois
http://www.informit.com/safari/author_bio.asp?ISBN=0735709211 - New
Riders Publishing

That's the first edition.  I would suggest getting the second edition
instead. :-)  (http://www.kitebird.com/mysql-book/)



Re: query gets count wrong

2004-07-12 Thread Gerald Taylor
I am an anal single-query-oholic.  I know I could do this in 2 queries.
I have a query involving several related tables, and I have attempted
to reduce it down to a minimal version that shows the cause, not the
results I want.
I am attempting to fill a summary table.
For each main item in this table I want to count the number of child
items in a related table that point to it.  This is fine.
I have a third table called ratings, which rates individual child items,
and it is the problem.

The table qxe contains e_id, which is a foreign key to the main table,
and q_id, which is a foreign key to the actual child items.  I had to set
things up this way because a q might belong to more than one e.  Right
now I am not interested in anything about a q's data; I just want
to count them.

The ratings table is the problem, because any q can have an arbitrary
number of ratings.  So a rating has q_id and e_id, as well as another key,
combining to form a multipart key.

Instead of the number of q_ids in the qxe table that have e_id equal to
the current e_id, I am getting as e_count the total number of
ratings for that e_id, which is a huge number.  I know I need another
constraint, but I can't figure out what it is.  What constraint can I add
to make this query do what I want, while still being able to average
the ratings?

SELECT e.e_id, e.e_code, COUNT(qxe.q_id) AS e_count,
       AVG(ratings.r_quality) AS avqual
FROM e, qxe, ratings
WHERE e.e_id = qxe.e_id
  AND ratings.e_id = e.e_id
  AND ratings.q_id = qxe.q_id
  AND ratings.e_id = qxe.e_id
GROUP BY e.e_id;
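
This is classic join fan-out: each q row is repeated once per rating it has, so COUNT(qxe.q_id) sees the inflated row set. One common fix (my suggestion, not from the thread; COUNT(DISTINCT) is available in MySQL 3.23 and later) is to count distinct child ids while still averaging over every rating row:

```sql
SELECT e.e_id, e.e_code,
       COUNT(DISTINCT qxe.q_id) AS e_count,  -- each child counted once
       AVG(ratings.r_quality)   AS avqual    -- still averages all rating rows
FROM e, qxe, ratings
WHERE e.e_id = qxe.e_id
  AND ratings.e_id = e.e_id
  AND ratings.q_id = qxe.q_id
GROUP BY e.e_id;
```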



selecting by average

2004-05-19 Thread Gerald Taylor
I have a table called ratings.
It has 3 columns of interest:
id    foreign key to another table
qual  a quality rating
u     the user who gave that rating (also a foreign key into the users table)
Now I would like to select all the unique ids for which the average
quality is below 1.5.
So if we have:

id  qual  u
1   5     999
1   4     888
2   1     999
2   1     888
3   3     777
3   2     888

it would tell me that id 2 has an average rating below 1.5.
It doesn't like SELECT id FROM ratings WHERE AVG(qual) < 1.5 GROUP BY id.
Unfortunately this server is still 3.23.
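
An aggregate like AVG() can't appear in WHERE, which is evaluated before grouping; the standard fix is HAVING, which 3.23 does support:

```sql
-- HAVING filters after grouping, so AVG() is legal there.
SELECT id
FROM ratings
GROUP BY id
HAVING AVG(qual) < 1.5;
```

With the sample data above, this returns id 2.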



Re: Order by price?

2004-04-29 Thread Gerald Taylor
I want to count the passing and failing scores for a group of tests.
So I have a table with a row that describes each test.
Each test has a minimum passing score.  Each test can be run an
arbitrary number of times, so I have a table of scores, which uses the
test id as a foreign key.

What I would like to do is count the fails and the passes,
ideally in a single query.
So the test table looks like this:
---------------
t_id
...
min_pass_score
---------------
and the score table looks like this:
---------------
score_id
t_id
score
...
---------------
I would like the result set to look like this:

t_id  no_passes  no_fails
-------------------------
1        5          6
2       12          4
3      192         34
...

How can I do this?  I have access to MySQL 4.0.18,
although the production server still runs 3.23.
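
One approach that works even on 3.23 (a sketch using the tables above; in MySQL a comparison evaluates to 1 or 0, so SUM() of a comparison is a conditional count):

```sql
SELECT t.t_id,
       SUM(s.score >= t.min_pass_score) AS no_passes,
       SUM(s.score <  t.min_pass_score) AS no_fails
FROM test t, score s
WHERE s.t_id = t.t_id
GROUP BY t.t_id;
```

Tests with no scores at all won't appear in the result; including them would take a LEFT JOIN.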






Re: bookings

2004-03-02 Thread Gerald Taylor
Kevin Waterson wrote:
I am (trying) to make a booking systems.
Currently I have a table with 3 timestamps that record
bookingDate, bookingFromDate and bookingToDate
I have another table that records 'seasons'.
This table contains two timestamps that record
seasonStartDate and seasonEndDate
also I have the rates as seasonRateWeekly and seasonRateNightly
What I need to do is INSERT INTO bookings the bookingFromDate and
bookingToDate, no problem there, but I need to know what rate to
charge them, and if the booking dates overlap seasons, the appropriate
rate needs to be applied.

All replies gratefully received,
Kevin
That sounds like a job for your application, so I would just do a few
tests on the dates you received from the user before you try to insert
a booking record into your database.
Like this (it's PHP, I don't know what you use, but the logic is the same
anyway):

  if (($s = season($startdate)) == ($e = season($enddate))) {
      if (interval($startdate, $enddate, 'days') > 7) {
          $rate = $weekly[$s];
      } else {
          $rate = $nightly[$s];
      }
  } else {
      // it straddles two seasons... do something
  }
then INSERT into bookings ...

I assume bookingDate is like today's date, and FromDate and ToDate are
the dates the facility is requested for...  You need to define what you
want to do for a weekly booking that straddles seasons: use the higher
rate, the lower rate, average them, or something.

Also, just to make processing easier later on, you might want to define
a flag to indicate whether they are getting a weekly or nightly bill.


Re: bookings

2004-03-02 Thread Gerald Taylor



How would I benefit from a stored procedure?

Stored procedures live with your data and can be called
directly from inside queries.  They can be customized to do what
you want to the data before MySQL gives it to you (SELECT) or after you
feed it in, which is what you want for an insert.
So your stored procedure would be something like
optimal_rate(date1, date2), where it does the logic I described before,
only it runs in your database server instead of your application
(e.g. web server), and is more closely bound to the data.
So if you had such a stored procedure, all you would have to
do when a user inputs booking dates is turn around and say
something like:

INSERT INTO bookings (bookstart, bookend, rate) VALUES
($start, $stop, optimal_rate($start, $stop))

Well, you'd need to get the syntax right, but your application code
would be much simpler, and the crunching would be done by the DBMS
instead of your application.  Especially nice if the DBMS is on somebody
else's nickel. ;)
Like the previous poster said, for now version 5 is still a little on
the bleeding edge to use for production, but I've heard
it supports stored procedures.  I used them in Postgres and they are a
pain to write, but nice once you get them.
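
For the curious, such a function in MySQL 5.0 syntax might look roughly like this (a sketch; the table and column names follow Kevin's earlier description, the rate logic is reduced to the same-season case, and none of it is tested):

```sql
DELIMITER //

CREATE FUNCTION optimal_rate(d1 DATE, d2 DATE)
RETURNS DECIMAL(10,2)
READS SQL DATA
BEGIN
  DECLARE r DECIMAL(10,2);
  -- Find the season covering the start date and pick weekly vs. nightly.
  SELECT IF(DATEDIFF(d2, d1) > 7, seasonRateWeekly, seasonRateNightly)
    INTO r
    FROM seasons
   WHERE d1 BETWEEN seasonStartDate AND seasonEndDate
   LIMIT 1;
  RETURN r;
END //

DELIMITER ;
```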

Kind regards
Kevin




Re: default encrypt for PASSWORD

2004-02-25 Thread Gerald Taylor
Paul DuBois wrote:
At 15:02 -0700 2/25/04, Colleen Dick wrote:

I'm sure this is in the manual somewhere or in the archives, but
I'm not finding it and I bet someone easily knows the short answer:
Using 3.23
setting a varchar field to PASSWORD(secret)
Having altered nothing regarding encryption in the server
what is the default encryption type for PASSWORD?
cuz I told PEAR::Auth that the encryption type is md5
and if that is wrong that could be the reason why it is not
working.


If you want MD5 encryption, you could use the MD5() function.

PASSWORD() uses encryption that is intended for use with MySQL
account management, not for general application encryption.
http://www.mysql.com/doc/en/Encryption_functions.html

Ah... so I shouldn't use it except for the grant tables.  I guess I
just carried my admin habits over into the app arena.  OK then, MD5 is
fine with me.  Poof, they're all changed.  And the users authenticate
too.  THANKS FOR YOUR ANSWER!
