Re: Dynamic ORing

2007-07-10 Thread Dave G
Prepared statements with the IN() list worked beautifully.  Performance
is slightly degraded, but I get the functionality I want and can live with
the slight performance hit.

Thanks again.

Dave

> Hi Dave,
>
> Dave G wrote:
>> Hello all,
>>
>> I'm trying to do some processing on the front end to optimize my query
>> on
>> the backend.  I would like to generate a list of id's for this query
>> like
>> so:
>>
>> SELECT REPLACE('3,4,5,6,7,8,9',',',' OR element_id=') INTO @tmp;
>>
>> Then use it like:
>>
>> mysql> select @tmp;
>> +---------------------------------------------------------------------------------------------------+
>> | @tmp                                                                                              |
>> +---------------------------------------------------------------------------------------------------+
>> | 3 OR element_id=4 OR element_id=5 OR element_id=6 OR element_id=7 OR element_id=8 OR element_id=9 |
>> +---------------------------------------------------------------------------------------------------+
>> 1 row in set (0.00 sec)
>>
>> mysql> select * from display__Element where element_id=@tmp;
>
> MySQL sees that element_id is a number and coerces @tmp into a number.
> Check the output of 'show warnings' immediately after this statement.
> Another way to see what's happening is to run 'select 0 + @tmp'.
> MySQL is truncating after the first non-digit character.
>
> What you want to do is use a prepared statement; prepare the statement as
> concat("select...", @tmp) and then execute the result.
>
>> +------------+--------------+------------+
>> | element_id | display_name | data_units |
>> +------------+--------------+------------+
>> |          3 | Sync         |            |
>> +------------+--------------+------------+
>> 1 row in set (0.00 sec)
>>
>> mysql> select * from display__Element where element_id=3 OR element_id=4
>> OR element_id=5 OR element_id=6 OR element_id=7 OR element_id=8 OR
>> element_id=9;
>
> A better approach is to use an IN() list, which MySQL might even be
> able to optimize better:
>
> WHERE element_id IN(3,4,5,6,7,8,9);
>
> DANGER: do NOT use an IN() subquery, of the form
>
> WHERE element_id IN(select element_id from some_other_table)
>
> MySQL optimizes this kind of query very poorly.
>
> Baron
>
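
The prepared-statement route Baron describes might be sketched like this
(a reconstruction for the archive, using the table and id list from the
thread, not the exact statement Dave ran):

```sql
-- Build the dynamic IN() list once, then prepare and execute it.
SET @ids = '3,4,5,6,7,8,9';
SET @sql = CONCAT('SELECT * FROM display__Element',
                  ' WHERE element_id IN (', @ids, ')');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```

Because the whole statement text is assembled before PREPARE, the string
is parsed as SQL rather than coerced to a number the way `WHERE
element_id=@tmp` is.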



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Dynamic ORing

2007-07-10 Thread Dave G
Thank you Baron, I'll give that a shot, and thanks for the IN() list tip
... much appreciated.

Dave







Dynamic ORing

2007-07-10 Thread Dave G
Hello all,

I'm trying to do some processing on the front end to optimize my query on
the backend.  I would like to generate a list of id's for this query like
so:

SELECT REPLACE('3,4,5,6,7,8,9',',',' OR element_id=') INTO @tmp;

Then use it like:

mysql> select @tmp;
+---------------------------------------------------------------------------------------------------+
| @tmp                                                                                              |
+---------------------------------------------------------------------------------------------------+
| 3 OR element_id=4 OR element_id=5 OR element_id=6 OR element_id=7 OR element_id=8 OR element_id=9 |
+---------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> select * from display__Element where element_id=@tmp;
+------------+--------------+------------+
| element_id | display_name | data_units |
+------------+--------------+------------+
|          3 | Sync         |            |
+------------+--------------+------------+
1 row in set (0.00 sec)

mysql> select * from display__Element where element_id=3 OR element_id=4
OR element_id=5 OR element_id=6 OR element_id=7 OR element_id=8 OR
element_id=9;
+------------+---------------+------------+
| element_id | display_name  | data_units |
+------------+---------------+------------+
|          3 | Sync          |            |
|          4 | Graph Samples | V          |
|          5 | First E       | V          |
|          7 | Graph Sample  | V          |
|          8 | Test Graph    | V          |
+------------+---------------+------------+
5 rows in set (0.00 sec)

mysql>

The problem is that when I try to use a variable that is a string with
ORs in it, only the first value is used.  Anybody know what is going on
here?

David Godsey





Group wise maximum

2007-06-28 Thread Dave G
I just posted a question with a subject of "Revised optimization question"
and did some more searching, and found that my problem should be titled
"group-wise maximum".

I need the group-wise maximum of this query based on payload_time:

CREATE PROCEDURE `getElement`(IN id INT UNSIGNED,
  IN ptime DOUBLE,
  IN tid VARCHAR(255),
  IN exact_time TINYINT)
BEGIN
   IF(ptime < 1) THEN
  SELECT MAX(ROUND(payload_time,6)) FROM data__ProcessedDataFrames
WHERE test_id=tid INTO ptime;
   END IF;
   SELECT
  J.product_id,
  P.processed_id,
  MAX(ROUND(P.payload_time,6)) as payload_time,
  P.top_level_product_name,
  SUBSTR(
 BINARY(processed_data),
 FLOOR(J.product_offset/8)+1,
 CEIL(J.product_length/8)) as substring,
 (SELECT HEX(substring)) as raw_data,
 (SELECT toString(
substring,
round(char_length(raw_data)/2,0),
data_type,
(SELECT attribute_value FROM
   config__DataProductAttributes
  WHERE attribute_name='FormatString'
  AND config__DataProductAttributes.product_id=
J.product_id),
product_offset % 8,
(product_length+(product_offset % 8)) % 8,
product_length,
byte_order,
(SELECT attribute_value FROM
   config__DataProductAttributes
  WHERE attribute_name = 'ConvParams'
  AND config__DataProductAttributes.product_id =
J.product_id))) as converted_data,
 (SELECT enum_name FROM
config__DataProductEnumConversions
   WHERE product_id=J.product_id
   AND enum_value =converted_data) as enumerated_data,
 (SELECT metricTest(converted_data,
(SELECT xmlTestMetric FROM
   test__TestMetrics
  WHERE product_id = J.product_id))) as test_metric
  FROM data__ProcessedDataFrames
   P INNER JOIN
  (SELECT E.product_id,
  top_level_product_name,
  product_length,
  product_offset,
  data_type,
  byte_order
FROM display__DataProducts_in_Element
  E INNER JOIN config__DataProducts D ON
E.product_id=D.product_id
  WHERE E.element_id=id) J
   ON P.top_level_product_name=J.top_level_product_name
   WHERE P.test_id=tid
   AND payload_time <= ptime
   GROUP BY J.product_id ;
END;;

mysql> desc data__ProcessedDataFrames;
+------------------------+------------------+------+-----+---------+----------------+
| Field                  | Type             | Null | Key | Default | Extra          |
+------------------------+------------------+------+-----+---------+----------------+
| processed_id           | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| top_level_product_name | varchar(255)     | YES  | MUL | NULL    |                |
| test_id                | int(10) unsigned | YES  | MUL | NULL    |                |
| payload_time           | double           | YES  | MUL | NULL    |                |
| universal_time         | double           | YES  |     | NULL    |                |
| processed_data         | mediumblob       | YES  |     | NULL    |                |
+------------------------+------------------+------+-----+---------+----------------+
6 rows in set (0.00 sec)

mysql>

This table can get quite large so I'm trying not to query on it twice to
get the MAX(payload_time) < ptime.  Obviously what I have is not the
group-wise maximum I was hoping for.
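
The standard group-wise-maximum rewrite joins the table against a derived
table of per-group maxima.  A sketch against the schema above (user
variables @tid and @ptime stand in for the procedure parameters; not
tested against this data):

```sql
-- One row per top_level_product_name: the row holding the greatest
-- payload_time at or before @ptime for the given test.
SELECT P.*
FROM data__ProcessedDataFrames P
INNER JOIN (
    SELECT top_level_product_name, MAX(payload_time) AS max_time
    FROM data__ProcessedDataFrames
    WHERE test_id = @tid AND payload_time <= @ptime
    GROUP BY top_level_product_name
) M ON  P.top_level_product_name = M.top_level_product_name
    AND P.payload_time = M.max_time
WHERE P.test_id = @tid;
```

The derived table scans the big table once per query, and the join back
picks up processed_data from the same row as the maximum, which the plain
GROUP BY in the procedure does not guarantee.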

Dave G.





Revised optimization question

2007-06-28 Thread Dave G
I asked a question yesterday that was quite onerous, so I have been
working out some of the details today, and would like to re-post my
question under a different light.

I have a table called data__ProcessedDataFrames that can grow to be
gigantic, and I need to speed up the query for pulling the data out.

What I'm trying to avoid is doing multiple queries on that table because
it is so big.  In my query I have a WHERE statement that looks like:
 WHERE P.test_id=tid
 AND payload_time <= ptime

where P is data__ProcessedDataFrames and tid is the test_id I'm looking at
and ptime is the payload_time that I'm looking at.  The problem with it is
I don't always know the exact payload_time, just that I want the
MAX(payload_time) < ptime.  I can't seem to get it right.  If I put the
MAX aggregate on payload_time, it returns the max payload_time but not the
data associated with that max payload_time.  So essentially I'm trying to
force my query to return the row that is associated with this max
payload_time without doing another query on the table to get the exact
payload_time.  Here is the table structure for data__ProcessedDataFrames:

mysql> desc data__ProcessedDataFrames;
+------------------------+------------------+------+-----+---------+----------------+
| Field                  | Type             | Null | Key | Default | Extra          |
+------------------------+------------------+------+-----+---------+----------------+
| processed_id           | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| top_level_product_name | varchar(255)     | YES  | MUL | NULL    |                |
| test_id                | int(10) unsigned | YES  | MUL | NULL    |                |
| payload_time           | double           | YES  | MUL | NULL    |                |
| universal_time         | double           | YES  |     | NULL    |                |
| processed_data         | mediumblob       | YES  |     | NULL    |                |
+------------------------+------------------+------+-----+---------+----------------+
6 rows in set (0.00 sec)

mysql>


Here is the SP:
CREATE PROCEDURE `getElement`(IN id INT UNSIGNED,
  IN ptime DOUBLE,
  IN tid VARCHAR(255),
  IN exact_time TINYINT)
BEGIN
   IF(ptime < 1) THEN
  SELECT MAX(ROUND(payload_time,6)) FROM data__ProcessedDataFrames
WHERE test_id=tid INTO ptime;
   END IF;
   SELECT
  J.product_id,
  P.processed_id,
  MAX(ROUND(P.payload_time,6)) as payload_time,
  P.top_level_product_name,
  SUBSTR(
 BINARY(processed_data),
 FLOOR(J.product_offset/8)+1,
 CEIL(J.product_length/8)) as substring,
 (SELECT HEX(substring)) as raw_data,
 (SELECT toString(
substring,
round(char_length(raw_data)/2,0),
data_type,
(SELECT attribute_value FROM
   config__DataProductAttributes
  WHERE attribute_name='FormatString'
  AND config__DataProductAttributes.product_id=
J.product_id),
product_offset % 8,
(product_length+(product_offset % 8)) % 8,
product_length,
byte_order,
(SELECT attribute_value FROM
   config__DataProductAttributes
  WHERE attribute_name = 'ConvParams'
  AND config__DataProductAttributes.product_id =
J.product_id))) as converted_data,
 (SELECT enum_name FROM
config__DataProductEnumConversions
   WHERE product_id=J.product_id
   AND enum_value =converted_data) as enumerated_data,
 (SELECT metricTest(converted_data,
(SELECT xmlTestMetric FROM
   test__TestMetrics
  WHERE product_id = J.product_id))) as test_metric
  FROM data__ProcessedDataFrames
   P INNER JOIN
  (SELECT E.product_id,
  top_level_product_name,
  product_length,
  product_offset,
  data_type,
  byte_order
FROM display__DataProducts_in_Element
  E INNER JOIN config__DataProducts D ON
E.product_id=D.product_id
  WHERE E.element_id=id) J
   ON P.top_level_product_name=J.top_level_product_name
   WHERE P.test_id=tid
   AND payload_time <= ptime
   GROUP BY J.product_id ;
END;;


You can see here at the end I did a GROUP BY on J.product_id (there can
be multiple product_ids), and one of the return values is:
MAX(ROUND(P.payload_time,6)) as payload_time,

This max does not force the processed_data from that row to be returned;
I get processed_data from another row because of the GROUP BY.

Any suggestions?
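
One way to make the query return the row that holds the maximum is a
correlated subquery; a sketch using the names from the procedure above
(@tid and @ptime stand in for the procedure parameters; untested against
this schema):

```sql
-- Keep only rows whose payload_time equals the per-group maximum,
-- so processed_data comes from the same row as that maximum.
SELECT P.*
FROM data__ProcessedDataFrames P
WHERE P.test_id = @tid
  AND P.payload_time =
      (SELECT MAX(P2.payload_time)
       FROM data__ProcessedDataFrames P2
       WHERE P2.test_id = P.test_id
         AND P2.top_level_product_name = P.top_level_product_name
         AND P2.payload_time <= @ptime);
```

The correlated subquery runs once per candidate row, so on a large MyISAM
table it needs an index on (test_id, top_level_product_name, payload_time)
to be usable.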

David Godsey



Re: optimization help

2007-06-27 Thread Dave G
I do, but I don't see any way around that with the data I have.

Dave G.

> Good Afternoon David
>
> sounds as if you have a number of non-unique indices (even possibly FTS!)
> slowing down queries.  This should help you concentrate on the slower
> indices
> mysql>
> select TABLE_NAME,COLUMN_NAME,INDEX_NAME from
> INFORMATION_SCHEMA.STATISTICS
> where NON_UNIQUE=1;
>
> Anyone else?
> Martin
>






Re: optimization help

2007-06-27 Thread Dave G
I think I can do that:

I don't have any other indexes, just the keys.

mysql> show create table data__ProcessedDataFrames;
+---------------------------+----------------------------------------------------------------+
| Table                     | Create Table                                                   |
+---------------------------+----------------------------------------------------------------+
| data__ProcessedDataFrames | CREATE TABLE `data__ProcessedDataFrames` (
  `processed_id` int(10) unsigned NOT NULL auto_increment,
  `top_level_product_name` varchar(255) default NULL,
  `test_id` int(10) unsigned default NULL,
  `payload_time` double default NULL,
  `universal_time` double default NULL,
  `processed_data` mediumblob,
  PRIMARY KEY  (`processed_id`),
  KEY `test_id` (`test_id`),
  KEY `payload_time` (`payload_time`),
  KEY `top_level_product_name` (`top_level_product_name`)
) ENGINE=MyISAM AUTO_INCREMENT=1392568 DEFAULT CHARSET=latin1 |
+---------------------------+----------------------------------------------------------------+
1 row in set (0.00 sec)

mysql>

As for the amount of data I expect to get: infinite, really.

Our setup: we have several boxes we are running tests from, where MySQL is
the storage engine on each local box, then we will have a main storage
area for all relevant tests.  Based on past data, the table's size
will be pushing 4G as it is (when we port the data), and I expect at least
that much more over the life of this software, but since the payloads I
will be getting the data from have not been developed yet, I can't be
entirely sure.  One of the reasons I was inquiring as to whether breaking
it up into several tables would be a good idea is because it would make it
easier for me to merge the data from the different testers into the main
data repository.  Otherwise I will have to figure out a good way
of redoing the test_id in each test that is stored in the main repository.

Slow queries will be a little hard to show without giving a full
evaluation of my system, so I'll simplify it a little.  I'm doing several
joins to get the right parameters to query this table in a stored
procedure, but when it comes down to it, the query on this table is
the big one, and I can modify my other joins; just making the query on
this table fast is my concern.  Example query:

select payload_time,HEX(processed_data) from data__ProcessedDataFrames
where test_id=18 AND top_level_product_name="DataProduct" AND payload_time
> 11808.74704 AND payload_time < 1180564096.24967;

What I'm concerned about is with how much data I will eventually have,
even scanning over the KEYS will take a long time.
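
Since the example query filters on test_id and top_level_product_name by
equality and payload_time by range, one composite index covering all three
(a suggestion, not something from the thread) would let MySQL satisfy the
WHERE clause with a single index range scan instead of merging the three
single-column keys:

```sql
-- Left-to-right: equality columns first, the range column last.
ALTER TABLE data__ProcessedDataFrames
    ADD INDEX idx_test_product_time
        (test_id, top_level_product_name, payload_time);

-- Verify the new index is chosen:
EXPLAIN SELECT payload_time, HEX(processed_data)
FROM data__ProcessedDataFrames
WHERE test_id = 18
  AND top_level_product_name = 'DataProduct'
  AND payload_time > 11808.74704
  AND payload_time < 1180564096.24967;
```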

Thanks

Dave G.

BTW: here's the giant query that I use.
   SELECT
  E.product_id,
  product_name,
  D.top_level_product_name,
  processed_id,
  product_offset,
  product_length,
  version_id,
byte_order,
  ROUND(R.payload_time,6) as payload_time,
  SUBSTR(
 BINARY(processed_data),
 FLOOR(product_offset/8)+1,
 CEIL(product_length/8)) as substring,
(SELECT HEX(substring)) as raw_data,
  (SELECT toString (
substring,
round(char_length(raw_data)/2,0),
 data_type,
 (SELECT attribute_value FROM
config__DataProductAttributes WHERE attribute_name =
'FormatString' AND
   config__DataProductAttributes.product_id =
  E.product_id),
 product_offset % 8,
 (product_le

optimization help

2007-06-27 Thread Dave G
I have a table in my database (currently) that grows to be huge (and I
need to keep the data).  I'm in a redesign phase and I'm trying to do it
right.  So here are the relevant details:

The table has several keys involved:

mysql> desc data__ProcessedDataFrames;
+------------------------+------------------+------+-----+---------+----------------+
| Field                  | Type             | Null | Key | Default | Extra          |
+------------------------+------------------+------+-----+---------+----------------+
| processed_id           | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| top_level_product_name | varchar(255)     | YES  | MUL | NULL    |                |
| test_id                | int(10) unsigned | YES  | MUL | NULL    |                |
| p_time                 | double           | YES  | MUL | NULL    |                |
| universal_time         | double           | YES  |     | NULL    |                |
| processed_data         | mediumblob       | YES  |     | NULL    |                |
+------------------------+------------------+------+-----+---------+----------------+
6 rows in set (0.00 sec)

This is the table that contains the data I'm interested in currently.
Queries on this table when it gets large are slow as molasses.  I'm
thinking about making a new table for anything with a different test_id
... any opinions as to whether this is good or bad?

Before you make fun of me for my questions, I'm a bit new to database
programming.

If it is better design to break it into smaller tables (for speed anyway),
then I would need to know how to query over multiple tables as though they
were one table.  Join will do this, but that takes forever (unless of
course I'm doing this wrong), so that's not a good option.  I need to
be able to query over multiple test_ids, which will be multiple tables,
for a specific top_level_product_name, within some time range (using
p_time).
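
If the per-test_id split is attempted, MyISAM's MERGE engine can present
several identically structured tables as one logical table (a sketch; the
table names are made up, and the MERGE table itself cannot enforce a
primary key):

```sql
-- Hypothetical per-test table; each test_id gets its own copy.
CREATE TABLE frames_test_18 (
    processed_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    top_level_product_name VARCHAR(255),
    p_time DOUBLE,
    processed_data MEDIUMBLOB,
    PRIMARY KEY (processed_id)
) ENGINE=MyISAM;

-- Union view over all per-test tables; queried like a single table.
CREATE TABLE frames_all (
    processed_id INT UNSIGNED NOT NULL,
    top_level_product_name VARCHAR(255),
    p_time DOUBLE,
    processed_data MEDIUMBLOB,
    KEY (processed_id)
) ENGINE=MERGE UNION=(frames_test_18) INSERT_METHOD=LAST;
```

The UNION list has to be altered each time a per-test table is added,
which is the main maintenance cost of this layout.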

Any help would be appreciated.  I will happily give more information if
you need to offer an educated opinion.

Thanks

David Godsey





Re: jdbc with a UDF

2007-06-05 Thread Dave G
Thanks for the reply.  I tracked the problem down.  Simple problem really,
I was using an old mysql-connector-java jar.  I updated that to the
correct version for my database and it all worked.

What version of the JDBC driver are you using?
  Good question to ask.  I asked myself that very question shortly after
emailing the list.

Thanks

David Godsey

> Perhaps you could show us the Java code you are using to invoke this
> UDF?  What are you expecting to get back from it? a result set? a
> return code from a function? What version of the JDBC driver are
> you using?
>
>  - michael
>
>
> On 6/5/07, Dave G <[EMAIL PROTECTED]> wrote:
>> I'm getting what appears to be a java reference back from values I
>> return from a UDF.  The values appear fine using php, and in the mysql
>> client.  The result looks something like:
>>
>> [EMAIL PROTECTED]
>>
>> The result should be a string (some representing floats, others as
>> integers).
>>
>> Any help will be appreciated.  I searched on google and only found where
>> another person had the same problem, but no solution.
>>
>> Dave G.
>>
>>
>>
>>
>> --
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>>
>>
>
>
> --
>  - michael dykman
>  - [EMAIL PROTECTED]
>
>  - All models are wrong.  Some models are useful.
>






jdbc with a UDF

2007-06-05 Thread Dave G
I'm getting what appears to be a java reference back from values I return
from a UDF.  The values appear fine using php, and in the mysql client.
The result looks something like:

[EMAIL PROTECTED]

The result should be a string (some representing floats, others as integers).

Any help will be appreciated.  I searched on google and only found where
another person had the same problem, but no solution.

Dave G.







Re: Mysql pushing data to client

2006-09-28 Thread Dave G
Thanks for pointing that out, I'll go with James's solution.

David Godsey
> David Godsey wrote:
>  > I am looking for a way to write a client program that will wake up when
>  > there is new data in the database, much like replication.
>  >
>  > So instead of my client polling the database on some fixed interval, I
>  > would like the mysql daemon to push the data to my client when there is
>  > new data.  I assume this is possible given that it is done with
>  > replication.  Does anybody have information on how to implement a
> client
>  > program like this?
>  >
>  > I'm using PHP, but I'm not opposed to using C or C++ for this kind of
>  > functionality.
>
> James Neff wrote:
>  > I'm doing something similar now and I just have my clients (a java app)
>  > periodically check a small table that indicates theres data ready for
>  > them.  It's a simple "SELECT count(*) ..." that is fairly inexpensive
>  > and the does this once every 5 seconds.
>  >
>  > I don't know if there is a built-in solution for mysql but this gets
> the
>  > job done for me.
>  >
>  > Let me know if you find a better way to do this.
>
> David Godsey wrote:
>> That is a good work around, I appreciate the idea.  The optimal would
>> be a client that listens on the mysql thread that pushes data to the
>> database.
>> But if there is no way to "tap into that thread", the "select count(*)"
>> would do.
>>
>> There must be a way to do it though, since it is done with replication.
>
> Except that it isn't.  The Master writes to its local binlog.  The
> slave pulls updates from the master periodically.  See the manual for
> all the details.
>
> You could write a daemon to run on the mysql server, which would watch for
> local
> changes and push them to your client, but that sounds overly complex to me
> compared to James' solution.
>
> Michael
>
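
James's polling approach from the thread could be as small as this (a
sketch; the flag-table name is made up):

```sql
-- Writers bump a small flag table whenever new data arrives.
CREATE TABLE pending_work (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Clients poll it cheaply every few seconds...
SELECT COUNT(*) FROM pending_work;

-- ...and clear the handled rows after fetching the real data.
DELETE FROM pending_work;
```

Because the flag table stays tiny, the COUNT(*) poll costs almost nothing
compared to polling the data tables themselves.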






UTF-8 With MySQL 4.0.18 and phpMyAdmin 2.5.5

2004-06-21 Thread Dave G
MySQL List,
At my request, my virtual hosting service has upgraded to MySQL
4.0.18 so that I can have better UTF-8 character support. Full Unicode
support is important to me as I do bilingual Japanese/English sites. My
understanding is that in 4.0.18 I can specify that a TEXT field is
specifically in UTF-8, or other encodings if I desire.
However, usually I use phpMyAdmin to manage my database, and in
the version that I'm running - 2.5.5, it doesn't seem to have any
special selectors or anything to indicate the character encoding of any
one field or column.
My question is, do I have to step outside of phpMyAdmin to
ensure that the correct encoding is being used? If I do so, will that
affect future database management from within phpMyAdmin?

Any advice would be much appreciated.

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





RE: Integrating phpBB with an existing MySQL database

2004-05-03 Thread Dave G
Jigal,

> I know that phpBB has a few converter scripts

I think the converter scripts are for a different issue. I don't have a
previous BBS system that I want to convert from. All I have is user
profile information stored in a simple MySQL database that I built
myself. The users are subscribed to a newsletter and also have some
biographical information that appears on the web site. All I want to do
is ensure that when they register for the newsletter and become a member
of the site that they are automatically included in the site's phpBB
forum.

> I think you will manage ;-)

Thanks for the encouragement. Since after a day or so of my posting
going out, no one has posted any "Make sure you definitely don't do X"
warnings, I'm going to go ahead and play with it.

Thanks to Joe for the password information and also especially de_RiN
for offering more specific help offlist.

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





Integrating phpBB with an existing MySQL database

2004-05-03 Thread Dave G
MySQL Listers,
I already asked this question on phpBB's own forums, but no one
seems inclined to answer me. I'm hoping there might be more charitable
souls here on this list.
My question is that I want to use phpBB because I am satisfied with
its overall functionality and usability. However, it sets up its own
series of tables in one's database which are a little complicated. I'm
sure they are sensibly laid out, but I am a relative newcomer to MySQL,
so their usage is opaque to me.
What I want to do is integrate the user accounts within phpBB
with the user data that I have already collected so far. I have a site
that's been up for a few years and it has user accounts where they store
contact and profile information. I'd like those same users to be able to
log into phpBB with the same username and password that they have always
been using.
I thought one thing I could do is try and find the table that
stores phpBB's usernames and passwords, and extend those to include the
user data that I have already. But I know enough to know that it is
unlikely to be as simple as that. For example, I thought I read
somewhere that phpBB uses a different hashing algorithm for its
passwords.
Has anyone here been through a process like this before? Can
anyone offer some tips and warnings about attempting this?
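
For what it's worth, phpBB 2.x stores passwords as plain MD5 hashes (an
assumption worth verifying against your phpBB version), so migrating
existing accounts could look roughly like this (the source table and its
columns are hypothetical):

```sql
-- Hypothetical: copy users from an existing members table into phpBB's
-- users table, hashing the stored plaintext password with MD5 so phpBB's
-- login check matches.
INSERT INTO phpbb_users (username, user_password, user_email)
SELECT username, MD5(password), email
FROM site_members;
```

phpBB's users table has many more required columns than shown here, so in
practice this would be one piece of a larger migration script.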

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





Strange error on one table using phpMyAdmin

2004-03-30 Thread Dave G
MySQL users,
I have one table in a database which will not allow me to edit
it in phpMyAdmin. Every time I try to select the row for editing, I get
an error which says "tbl_row_delete.php: Missing sql_query".
I have 42 other tables in this database, all of which work fine.
I have attempted to rename, move, and copy the database. I also tried
functions in the "Operations" tab which I don't understand very well
called "Check table", "Repair table" and "Analyze table". All of which
simply reported that my table was "OK".
I hope to avoid having to rewrite this entire table, but I am at a
loss as to how I can get access to it. It seems the only operation it
will allow me is to either delete rows or drop the whole table. I am a
MySQL beginner and I am most comfortable using it within phpMyAdmin, but
I will try and follow any command line instructions anyone can offer
that might help.

Thank you for your help.

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





RE: 4.1 Stable Enough For Beginner To Use? [SOLVED]

2004-03-01 Thread Dave G

Thanks to all who responded. I think I will attempt to run MySQL
4.1 for my web sites.

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





4.1 Stable Enough For Beginner To Use?

2004-02-29 Thread Dave G
MySQL Listers,
The ability to have utf-8 encoded data in my database is
important enough for me that I am considering upgrading to 4.1. I am a
beginner to MySQL, having only used it for not quite a year. My question
is whether or not MySQL 4.1 is stable enough for a beginner to use. On
the one hand, if things went wrong I would be unlikely to tell the
difference between when version 4.1 had a bug and when I was making a
mistake. On the other hand, I store only a few dozen to a few hundred
records in any one database, my sites are only designed for at most a
few thousand users, and I don't know enough to do anything more than
very simple queries.
So would you think that it's safe for me to use it, or would you
recommend I hold off from it?

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





Storing utf-8 on MySQL 3.23.58, and preparing for MySQL 4.1

2004-02-28 Thread Dave G
MySQL Listers,
Currently I am running MySQL 3.23, which I know is not really
designed to handle utf-8. I am eagerly anticipating full utf-8 support
in version 4.1.
In the meantime, I am curious to know if there are any
particular considerations I should take into account when storing utf-8
data in my current database which will make the transition to MySQL 4.1
as smooth as possible. I am new to databasing, and I have never upgraded
a database before. In any case, my sites are hosted on a virtual server,
where I have little control over the database settings.
By reading the archives of this list and looking around on the
net, I'm led to believe that it is possible to store utf-8 strings in
TEXT columns, and that the only sacrifice is that sorting won't really
work. Can I also store utf-8 strings in CHAR and VARCHAR columns?
Up until now, I've been storing text as binary data. But in 4.1
it will be possible to designate different columns as different encoding
sets. Will there be problems in going from a BLOB column to a utf-8 TEXT
column? Will it be any better for me to store my current utf-8 data as
TEXT columns and then in 4.1 assign encoding? Will that be possible?
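
In 4.1 the BLOB-to-TEXT transition described above can be done per column;
a sketch (the table and column names are hypothetical):

```sql
-- Convert a binary column to a utf8 TEXT column in MySQL 4.1.
-- Going through BLOB first avoids a charset conversion of the raw
-- bytes, which in this case are already utf8-encoded.
ALTER TABLE articles MODIFY body BLOB;
ALTER TABLE articles MODIFY body TEXT CHARACTER SET utf8;
```

A column that is already BLOB only needs the second statement; the
two-step form matters when converting from a TEXT column in another
character set.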
Any advice would be greatly appreciated.
Thank you.

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





RE: Storing mysql dates as an integer [SOLVED]

2004-01-08 Thread Dave G
Fred,

> An unsigned int is probably more appropriate for timestamps than a
> signed int.

Thank you, that was the kind of advice I was looking for. And your point
about not using UNIX time stamps for birthdays and things that could
well be outside their date range is well taken. I'm only using it for
events and people signing up for those events, so all the dates should
be clustered around the present time.
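An editor's sketch of the range difference Fred is pointing at: a signed 32-bit INT runs out of room for Unix timestamps in January 2038, while UNSIGNED extends the ceiling to 2106 (at the cost of being unable to store pre-1970 instants such as birthdays, which is the other caveat raised above).

```python
from datetime import datetime, timezone

SIGNED_MAX = 2**31 - 1    # largest value a signed 32-bit INT can hold
UNSIGNED_MAX = 2**32 - 1  # largest value an UNSIGNED 32-bit INT can hold

# Signed storage hits its ceiling in January 2038:
print(datetime.fromtimestamp(SIGNED_MAX, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00

# Unsigned storage buys roughly another 68 years:
print(datetime.fromtimestamp(UNSIGNED_MAX, tz=timezone.utc))
# 2106-02-07 06:28:15+00:00

# But unsigned cannot represent anything before the epoch:
assert UNSIGNED_MAX - SIGNED_MAX == 2**31  # the extra headroom gained
```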

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





RE: Storing mysql dates as an integer

2004-01-07 Thread Dave G
Paul,

> >(Unix timestamps are ten digits, aren't they?).  
> No, they're fourteen digits: CCYYMMDDhhmmss

I think we might be talking about different things. A UNIX time stamp is
ten digits long:
http://www.unixtimestamp.com/
The fourteen-digit CCYYMMDDhhmmss format you describe is how MySQL displays its own TIMESTAMP column type.

I want to know if there's any problem storing a UNIX timestamp purely as
an integer in my MySQL database.
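An editor's aside: the two formats being talked past each other here can be shown side by side. A Unix timestamp is a count of seconds since 1970-01-01 UTC, which is ten digits for any date between September 2001 and the year 2286; MySQL's pre-4.1 TIMESTAMP display format is the fourteen-digit CCYYMMDDhhmmss string.

```python
import time

now = int(time.time())
# Seconds since the epoch: ten digits for current dates.
assert len(str(now)) == 10

# MySQL's old TIMESTAMP display format, CCYYMMDDhhmmss: fourteen digits.
mysql_style = time.strftime("%Y%m%d%H%M%S", time.gmtime(now))
assert len(mysql_style) == 14
```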

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





Storing mysql dates as an integer

2004-01-07 Thread Dave G
MySQL Gurus,
Because of time zone differences between my web site's intended
audience (Japan) and my server's location (Nevada), I've found it
simplest, and perhaps most efficient, to do all my time calculations
in PHP.
Up to now, I've still been storing all my dates in native MySQL
time formats - like DATETIME. However, this requires a bit of format
conversion between PHP and MySQL.
What I'm thinking now is that it might just be easier for me to
store the date as a UNIX timestamp format in MySQL as a simple ten digit
long integer (Unix timestamps are ten digits, aren't they?). That way I
can pass them to and from PHP and do all the work on the PHP side
without having to do any format conversions in my SELECT and INSERT
queries.
Is this a horribly bad idea for any reason? Is the INTEGER
format the best suited for this purpose? Any tips would be greatly
appreciated.
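An editor's note on why this works: an epoch integer is timezone-neutral, so the same stored value denotes the same instant whether it is rendered for Japan or Nevada, and all conversion can indeed live on the PHP side. A quick Python sketch (the epoch value below is illustrative, not from the thread):

```python
from datetime import datetime, timezone, timedelta

stored = 1073433600  # an example epoch integer, as it would sit in an INT column

jst = timezone(timedelta(hours=9))   # Japan Standard Time, UTC+9
pst = timezone(timedelta(hours=-8))  # Nevada in winter, UTC-8

# Rendering for each audience is a pure display step; the stored
# integer itself encodes one unambiguous instant:
japan_view = datetime.fromtimestamp(stored, tz=jst)
nevada_view = datetime.fromtimestamp(stored, tz=pst)
assert japan_view == nevada_view          # the same moment in time
assert japan_view.hour != nevada_view.hour  # displayed as different wall clocks
```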

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]





RE: Newbie Question - can I collapse these two queries into one line? [SOLVED]

2003-12-30 Thread Dave G
> One extra join will do it. You're looking for all improvws in 
> the next seven days that aren't cancelled, right?

Yes... thank you. I see where I was going wrong now. The extra joins I
needed were to match the wsid to the member.id by matching both to where
they appear in the attend table. Previously I was trying to match the
member.id directly to the date, which was confusing me.
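An editor's sketch of the single-query join being described, using SQLite in Python so it runs self-contained. The table and column names (members, improvws, improvwsattend, attendeeid, improvwsid, wsdate, cancelled) are taken from the question thread in this archive; MySQL syntax for the query itself would be essentially the same.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE members (id INTEGER PRIMARY KEY, email TEXT,
                          firstname TEXT, lastname TEXT);
    CREATE TABLE improvws (improvwsid INTEGER PRIMARY KEY,
                           wsdate TEXT, cancelled INTEGER);
    CREATE TABLE improvwsattend (improvwsid INTEGER, attendeeid INTEGER);

    INSERT INTO members VALUES (1, 'a@example.com', 'Aki', 'Tanaka');
    INSERT INTO members VALUES (2, 'b@example.com', 'Ben', 'Smith');
    INSERT INTO improvws VALUES (10, '2004-01-03', 0);  -- upcoming, not cancelled
    INSERT INTO improvws VALUES (11, '2004-01-04', 1);  -- cancelled
    INSERT INTO improvwsattend VALUES (10, 1);
    INSERT INTO improvwsattend VALUES (11, 2);
""")

# The "extra join" matches the workshop to the member *through* the
# attendance table, replacing the two-step PHP version with one query:
rows = db.execute("""
    SELECT m.email, m.firstname, m.lastname
    FROM members m
    JOIN improvwsattend a ON a.attendeeid = m.id
    JOIN improvws w       ON w.improvwsid = a.improvwsid
    WHERE w.wsdate >= '2004-01-01' AND w.wsdate <= '2004-01-07'
      AND w.cancelled = 0
""").fetchall()
print(rows)  # only the attendee of the non-cancelled workshop
```

Note the inclusive `>=`/`<=` bounds, per the follow-up question in the quoted reply.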

> Also, are you sure you don't want >= and <= ?

Um... well, I'm too new to be sure about anything, but I'll try it that
way and see if that gets the results I'm expecting.

> That's because you were being inefficient. :)

Thanks for taking the time to respond! My script is working now!

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]
 





Newbie Question - can I collapse these two queries into one line?

2003-12-29 Thread Dave G
MySQL Gurus,
Hello, I am new to this list. I have looked through the archives
and online, but the particulars of my situation still leave me perplexed
as to how to solve this. It may be more of a lack of ability to
logically sort my query than it is an issue of syntax.
I have three tables. One records members and information about
them, such as their email addresses. Another table keeps track of
classes that the members attend, and in that table is the date of the
classes. The third table records when a member is attending a class.
What I want to be able to do is select all the email addresses
of all the students who attend a class on a particular date. If I do it
as a two step process in PHP, it works:

$wsQuery = "SELECT improvwsid FROM improvws WHERE wsdate > '" . $today
. "' AND wsdate < '" . $sevenDays . "' AND cancelled = 0";
$wsResult = mysql_query($wsQuery);
$wsid = mysql_result($wsResult, 0, "improvwsid");
$emailQuery = "SELECT members.email AS email, members.firstname AS
firstname, members.lastname AS lastname FROM members, improvwsattend AS attend
WHERE members.id = attend.attendeeid AND attend.improvwsid = " . $wsid;
$emailResult = mysql_query($emailQuery);

While this is getting the job done, it doesn't sit right because
it feels like I'm being inefficient. There must be a way to get the
results I want in one query. But not only can I not figure out how to do
that, I'm at a loss as to how to describe where my thinking is going
wrong.
Any help would be much appreciated.

-- 
Yoroshiku!
Dave G
[EMAIL PROTECTED]

