is changing my.cnf without restart safe?

2010-11-08 Thread AHMET ARSLAN
Hello MySQL Community,

Last Friday I accidentally changed the /etc/mysql/my.cnf file on the server:
I set the variable innodb_data_file_path to ibdata1:100M.

Then I realized that I had changed the server's copy. To get the original value
back, I issued the query:
SHOW VARIABLES LIKE 'innodb_data_file_path';

I got the following:
innodb_data_file_path =
ibdata1:4000M;ibdata2:4000M;ibdata3:4000M;ibdata4:4000M:autoextend

I wrote this value back into my.cnf.

MySQL was not restarted at any point in this process; the whole thing took 5-10 minutes.

Here are my questions:

If I change something in my.cnf, the changes are not activated until I restart
MySQL, right?

Is the above scenario safe? Do you think I messed something up?

Thank you for your help.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: is changing my.cnf without restart safe?

2010-11-08 Thread Johan De Meersman
No, you haven't messed anything up; editing the file is in and of itself safe.
I didn't realise you could change the InnoDB datafiles on the fly, though -
thanks for that hint :-)

MySQL will never write the config file itself, so you're not at risk of a
conflict there. You are, however, at risk of putting something in the config
file that messes up your MySQL server at the next restart; but that's pretty
much the case for any other daemon, too.
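For what it's worth, the split between the file and the running server can be seen directly from the client. A quick sketch (using max_connections as an arbitrary example of a dynamic variable; innodb_data_file_path itself cannot be changed at runtime):

```sql
-- The running value was read from my.cnf at startup; editing the
-- file afterwards does not affect the live server.
SHOW VARIABLES LIKE 'innodb_data_file_path';

-- Dynamic variables can be changed on the live server with SET
-- GLOBAL, but that change is never written back to my.cnf and is
-- lost at the next restart unless the file is edited too.
SET GLOBAL max_connections = 200;
SHOW GLOBAL VARIABLES LIKE 'max_connections';
```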







-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Running Queries When INSERTing Data?

2010-11-08 Thread Hal Vaughan
I'm redesigning some software that's been in use since 2002.  I'll be working 
with databases that will start small and grow along the way.  

In the old format, data would come to us in mega-big text files that had to be 
parsed and manipulated and so on with Perl to remove crap and finally produce 
one tab delimited file.  Once that file was created, another simple program 
would go through and use the data in each line for an INSERT statement that 
would put the data in a table.  This table also has an Idx field that is an 
auto-incrementing primary key for the table.

Each night at 3:30 am, a program would run and would go through the same 
process for each client.  I never timed it, but it could take something like 
30-60 seconds per client, but timing wasn't a major issue, since it had a LONG 
time from then until new data would be inserted into the DB.  The SELECT 
statements to pull the data for each client involve a number of AND and OR 
conditions.  The first one of these would create a temporary table with its 
results, then another long SELECT statement would create a 2nd temporary table 
by filtering the data out more.  This would continue for a few temporary tables 
until the data was filtered.  Then it would be packaged up and encrypted, then 
sent out to the client, who has a program on his computer to read that data and 
print it out if desired.

This has worked, but for a number of reasons, a once-a-day data pull and send 
won't work as well with the new design.  The program on the clients' computers 
will be able to access a special server just for them directly.  (I know the 
concept of a server for each human client sounds inefficient, but it actually 
improves operations in a number of ways.)  So each server will only have to 
provide data for one client.

The big difference is that I'd like to make it so they can access the data 
live, or almost live.  I don't mean all the data, but the subset that meets 
their needs.  In other words, the equivalent of what was sent to them daily in 
the old system.  Their individual servers will still get the big tab-delimited 
file that will still be INSERTed into their DB line by line.  But I'd like to
be able to select from the new data as it comes in, once it's been given a new 
number in the Idx field.

Is there any way to run a row of data through SELECT queries as it is being 
INSERTed into a table -- or just after?
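One built-in mechanism worth noting here: MySQL 5.0 and later support triggers, which fire per row at INSERT time. A hedged sketch (all table, column, and condition names are invented for illustration):

```sql
-- Hypothetical: as each row lands in client_data, copy the ones
-- matching this client's filter into a ready-to-serve table.
DELIMITER //
CREATE TRIGGER screen_new_rows
AFTER INSERT ON client_data
FOR EACH ROW
BEGIN
    -- this condition stands in for the real AND/OR screens
    IF NEW.col_a > 100 THEN
        INSERT INTO filtered_data (idx, col_a)
        VALUES (NEW.idx, NEW.col_a);
    END IF;
END//
DELIMITER ;
```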

The reason for doing this, instead of INSERTing all the data and then running a
program, is that as the database grows, pulling out the data will take longer
and longer; if there were a way to screen data as it comes in, that would
make it easier to provide instantly available data.

I also know my knowledge of MySQL is quite limited, so if this can be done in 
better ways, I'd be interested in hearing about them.

Thank you!



Hal



RE: Running Queries When INSERTing Data?

2010-11-08 Thread Gavin Towey
I'm not sure I understand exactly what you mean, but I think you just need to 
keep a timestamp associated with each row as it is inserted, put an index on 
it, then you can select new data just by using the appropriate time range.
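That approach might look like this in SQL (table and column names are invented for illustration):

```sql
-- Add an indexed insert-timestamp to the existing table.
ALTER TABLE client_data
    ADD COLUMN inserted_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    ADD INDEX idx_inserted_at (inserted_at);

-- Fetch only rows that arrived since the last poll.
SELECT *
  FROM client_data
 WHERE inserted_at > '2010-11-08 03:30:00';
```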

Also, if you're parsing files into tab delimited format, you don't need to 
write a separate parser to insert rows line by line.  MySQL has LOAD DATA 
INFILE which takes delimited text files and inserts data in bulk, which is much 
faster than inserting line by line.
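A minimal LOAD DATA INFILE invocation for a tab-delimited file (path and column names hypothetical; tab-separated fields and newline-terminated lines are the statement's defaults):

```sql
LOAD DATA INFILE '/tmp/client_feed.txt'
INTO TABLE client_data
(col_a, col_b, col_c);
```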

Regards,
Gavin Towey


This message contains confidential information and is intended only for the
individual named.  If you are not the named addressee, you are notified that
reviewing, disseminating, disclosing, copying or distributing this e-mail is
strictly prohibited.  Please notify the sender immediately by e-mail if you
have received this e-mail by mistake and delete this e-mail from your system.
E-mail transmission cannot be guaranteed to be secure or error-free as
information could be intercepted, corrupted, lost, destroyed, arrive late or
incomplete, or contain viruses. The sender therefore does not accept liability
for any loss or damage caused by viruses or errors or omissions in the contents
of this message, which arise as a result of e-mail transmission. [FriendFinder
Networks, Inc., 220 Humboldt Court, Sunnyvale, CA 94089, USA, FriendFinder.com]




Re: Running Queries When INSERTing Data?

2010-11-08 Thread Hal Vaughan

On Nov 8, 2010, at 1:23 PM, Gavin Towey wrote:

> I'm not sure I understand exactly what you mean, but I think you just need to
> keep a timestamp associated with each row as it is inserted, put an index on
> it, then you can select new data just by using the appropriate time range.

But won't that take just as long as any other query?  Or will it be sped up
because all the matching records would be adjacent to each other -- like all
at the end?

> Also, if you're parsing files into tab delimited format, you don't need to
> write a separate parser to insert rows line by line.  MySQL has LOAD DATA
> INFILE which takes delimited text files and inserts data in bulk, which is
> much faster than inserting line by line.

THANKS!  Is this new?  I never read about it before, but then again, from about 
2005-2006 until now the system was running smoothly and that's a long gap with 
no programming work!


Hal


numeric comparisons

2010-11-08 Thread Larry Martell
I have a client that asked me to look into a situation where they were
seeing different behavior with the same data and the same SQL on two
different servers.

They have some SQL that compares a double to a varchar in a where
clause - something like this:

where (doubleCol < varcharCol and some other conditions) or
      (doubleCol > varcharCol and some other conditions)

Let's take an example where the varcharCol has a string like
'4.5000' and the doubleCol has 4.5.

On their 5.0.45 server this was working as 'expected' (i.e. the
comparisons in the where clause were false, since the values are numerically
equal). But on their 5.0.18 server the < clause was resolving to true.
I changed the query to cast both columns to binary, so the query now
reads:

where (cast(doubleCol as binary) < cast(varcharCol as binary) and some
other conditions) or
      (cast(doubleCol as binary) > cast(varcharCol as binary) and
some other conditions)

And now on both servers the < clause of the query is resolving to true.

I know I can strip off the trailing zeros from the varchar, but there
must be a generic way to cast these so they compare properly as
numbers.

TIA
-larry




Re: numeric comparisons

2010-11-08 Thread Peter Brawley

> I know I can strip off the trailing zeros from the varchar, but there
> must be a generic way to cast these so they compare properly as
> numbers.


Cast() may happen to mask some rounding errors, but the only way to be
sure of getting rid of them in float or double values is to change the
*storage type* to DECIMAL.
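As a statement, that change might look like this (table name and precision are illustrative; pick a DECIMAL(M,D) wide enough for the real data):

```sql
-- Store exact decimal values instead of binary floating point.
ALTER TABLE measurements
    MODIFY doubleCol DECIMAL(10,5) NOT NULL;
```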


PB






Re: numeric comparisons

2010-11-08 Thread Larry Martell
On Mon, Nov 8, 2010 at 1:01 PM, Michael Satterwhite mich...@weblore.com wrote:
> While I don't know why the behavior changed, the comparison of floating point
> values has been an issue since the day computers were created. In the last 10
> - 15 years, the comparisons have worked better, but it's still an issue. The
> problem is that most decimal fractions do not convert well to binary.
> Actually, I'd expect the example of 4.5 to work fine, as 0.5 does convert well
> to a binary floating point - but that is academic.
>
> In the 1970's I got used to writing floating point comparisons as
>
>        x = 4.3
>        if(abs(x - 4.3) .lt. some tolerance)
>
> The tolerance is how close the values needed to be for me to consider them
> equal. An example might be
>
>        if(abs(x - 4.3) .lt. 0.1)
>
> Someone else might be able to shed some light on why the behavior changed.


Prompted by a reply I got off-list, I changed the cast from binary to
decimal(10,5), and it now seems to work properly on both servers.
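That fix, sketched against the query shape from the original post (operators and condition placeholders are illustrative, as in the original):

```sql
-- Casting both sides to the same DECIMAL type makes the
-- comparison numeric and exact on both servers.
where (cast(doubleCol as decimal(10,5)) < cast(varcharCol as decimal(10,5))
       and some other conditions) or
      (cast(doubleCol as decimal(10,5)) > cast(varcharCol as decimal(10,5))
       and some other conditions)
```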




RE: Running Queries When INSERTing Data?

2010-11-08 Thread BMBasal
> But won't that take just as long as any other queries?  Or will it be
> speeded up because all the matching records would be adjacent to each other
> -- like all at the end?

You can order the result set by timestamp in descending order, so the
latest rows come up first, i.e., LIFO.
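In SQL (table and column names hypothetical):

```sql
SELECT *
  FROM client_data
 ORDER BY inserted_at DESC   -- newest rows first (LIFO)
 LIMIT 100;
```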


RE: numeric comparisons

2010-11-08 Thread Jerry Schwartz
The two different versions of MySQL might perform the calculations in a 
slightly different order.

By the way, it isn't just comparing numbers that can cause misleading results. 
Consider the following.


100 + 1.5 + 7 - 100

What is the result of that calculation?

It depends upon the native precision of the machine, how the language chooses 
to cast the operands, and how (or if) the compiler chooses to reorder the 
operations.

I would not rely on the result being the same if you changed software 
versions, let alone languages.
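A concrete illustration of this effect (example values are my own; MySQL treats E-notation literals as double precision):

```sql
-- At magnitude 1e17 the spacing between adjacent doubles is 16,
-- so a small addend can vanish depending on evaluation order.
SELECT 1e17 + 1.5 - 1e17;   -- 0: the 1.5 is absorbed and lost
SELECT 1e17 - 1e17 + 1.5;   -- 1.5: the large terms cancel first
```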


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com









RE: Death of MySQL popularity?

2010-11-08 Thread Daevid Vincent
 -Original Message-
 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On 
 Behalf Of Johan De Meersman
 Sent: Thursday, November 04, 2010 2:26 AM
 To: jcbo...@yahoo.com
 Cc: MySQL
 Subject: Re: Death of MySQL popularity?
 
> You may want to read that again, but with your glasses on :-)
>
> "Subscription" means roughly "commercial support". The (1) subscript means
> "Features only available in Commercial Editions", and is noted *only* for
> Workbench SE, Enterprise Monitor, Enterprise Backup and Cluster Manager.
>
> I will join you in wondering whether that means Workbench is gonna go
> payware, though.
>
> On Thu, Nov 4, 2010 at 10:13 AM, Christoph Boget
> christoph.bo...@gmail.com wrote:
>
>> http://www.mysql.com/products/
>>
>> So the free version is going to include only MyISAM?  And you won't be
>> able to connect using MySQL Workbench (and presumably apps like MySQL
>> Query Browser)?  Otherwise you have to shell out $2k?  Wow.  I think
>> it might be time to start seriously looking at Postgres...



So there definitely is some confusion out there. Can someone from the
@mysql / @oracle camp please confirm or deny the allegations?

http://blog.herlein.com/2010/11/oracle-is-the-borg-enterprise-software-development-will-be-assimilated/

http://digitizor.com/2010/11/05/innodb-dropped-from-oracle-mysql-classic-edition/





Re: Death of MySQL popularity?

2010-11-08 Thread Michael Dykman
I think Joerg Bruehe has already weighed in.  That is about as direct an
answer as you are likely to get, unless you have Larry Ellison on Facebook.

 - michael dykman


On Mon, Nov 8, 2010 at 4:41 PM, Daevid Vincent dae...@daevid.com wrote:
 -Original Message-
 From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On
 Behalf Of Johan De Meersman
 Sent: Thursday, November 04, 2010 2:26 AM
 To: jcbo...@yahoo.com
 Cc: MySQL
 Subject: Re: Death of MySQL popularity?

 You may want to read that again, but with your glasses on :-)

 Subscription means roughly commercial support. The (1)
 subscript means
 Features only available in Commercial Editions, and is
 noted *only* for
 Workbench SE, Enterprise Monitor, Enterprise Backup and
 Cluster Manager.

 I will join you in wondering whether that means Workbench is gonna go
 payware, though.



 On Thu, Nov 4, 2010 at 10:13 AM, Christoph Boget
 christoph.bo...@gmail.comwrote:

  http://www.mysql.com/products/
 
  So the free version is going to include only MyISAM?  And
 you won't be
  able to connect using MySQL Workbench (and presumably apps
 like MySQL
  Query Browser)?  Otherwise you have to shell out $2k?  Wow.  I think
  it might be time to start seriously looking at Postgres...



 So there definitely is some confusion out there. Can someone from the
 @mysql / @oracle camp please confirm or deny the allegations?

 http://blog.herlein.com/2010/11/oracle-is-the-borg-enterprise-software-development-will-be-assimilated/

 http://digitizor.com/2010/11/05/innodb-dropped-from-oracle-mysql-classic-edition/







-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.




RE: Running Queries When INSERTing Data?

2010-11-08 Thread Gavin Towey
If you are selecting records within a certain time range that is a subset of 
the entire set of data, then indexes which use the timestamp column will be 
fine.

More generally: create appropriate indexes to optimize queries.

Although typically, you should design the database to be correct first, then 
identify performance bottlenecks in the production system, and fix them 
individually.  Trying to predict performance is difficult.  If you're 
concerned, then it's best to create a test that reproduces the exact situation 
of which you are unsure so you can observe real performance, and experiment 
with optimizations.
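
Gavin's indexing advice can be sketched as follows (the table and column names here are hypothetical, not from the thread):

```sql
-- Add a timestamp column plus an index on it; a time-range SELECT can
-- then use an index range scan instead of a full table scan.
ALTER TABLE client_data
    ADD COLUMN inserted_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    ADD KEY idx_inserted_at (inserted_at);

-- Fetch only the rows inserted in the last five minutes.
SELECT *
FROM client_data
WHERE inserted_at >= NOW() - INTERVAL 5 MINUTE;
```

Because new rows get ever-increasing timestamps, such a query stays fast as the table grows: it touches only the index range covering the requested window, not the whole table.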


-Original Message-
From: Hal Vaughan [mailto:h...@halblog.com]
Sent: Monday, November 08, 2010 10:39 AM
To: mysql@lists.mysql.com
Subject: Re: Running Queries When INSERTing Data?


On Nov 8, 2010, at 1:23 PM, Gavin Towey wrote:

 I'm not sure I understand exactly what you mean, but I think you just need to 
 keep a timestamp associated with each row as it is inserted, put an index on 
 it, then you can select new data just by using the appropriate time range.

But won't that take just as long as any other query?  Or will it be sped up 
because all the matching records would be adjacent to each other -- like all 
at the end?

 Also, if you're parsing files into tab delimited format, you don't need to 
 write a separate parser to insert rows line by line.  MySQL has LOAD DATA 
 INFILE which takes delimited text files and inserts data in bulk, which is 
 much faster than inserting line by line.
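
The LOAD DATA INFILE statement mentioned above looks roughly like this (the file path and table name are made up for illustration; tab-delimited fields and newline-terminated lines are MySQL's defaults, shown explicitly here):

```sql
-- Bulk-load a tab-delimited file in a single statement instead of
-- issuing one INSERT per line.
LOAD DATA INFILE '/tmp/client_feed.txt'
INTO TABLE client_data
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
```
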

THANKS!  Is this new?  I never read about it before, but then again, from about 
2005-2006 until now the system was running smoothly and that's a long gap with 
no programming work!


Hal


 Regards,
 Gavin Towey

 -Original Message-
 From: Hal Vaughan [mailto:h...@halblog.com]
 Sent: Monday, November 08, 2010 10:18 AM
 To: mysql@lists.mysql.com
 Subject: Running Queries When INSERTing Data?

 I'm redesigning some software that's been in use since 2002.  I'll be working 
 with databases that will start small and grow along the way.

 In the old format, data would come to us in mega-big text files that had to 
 be parsed and manipulated and so on with Perl to remove crap and finally 
 produce one tab delimited file.  Once that file was created, another simple 
 program would go through and use the data in each line for an INSERT 
 statement that would put the data in a table.  This table also has an Idx 
 field that is an auto-incrementing primary key for the table.

 Each night at 3:30 am, a program would run and would go through the same 
 process for each client.  I never timed it, but it could take something like 
 30-60 seconds per client, but timing wasn't a major issue, since it had a 
 LONG time from then until new data would be inserted into the DB.  The SELECT 
 statements to pull the data for each client involve a number of AND and OR 
 conditions.  The first one of these would create a temporary table with its 
 results, then another long SELECT statement would create a 2nd temporary 
 table by filtering the data out more.  This would continue for a few 
 temporary tables until the data was filtered.  Then it would be packaged up 
 and encrypted, then sent out to the client, who has a program on his computer 
 to read that data and print it out if desired.

 This has worked, but for a number of reasons, a once-a-day data pull and send 
 won't work as well with the new design.  The program on the clients' 
 computers will be able to access a special server just for them directly.  (I 
 know the concept of a server for each human client sounds inefficient, but it 
 actually improves operations in a number of ways.)  So each server will only 
 have to provide data for one client.

 The big difference is that I'd like to make it so they can access the data 
 live, or almost live.  I don't mean all the data, but the subset that meets 
 their needs.  In other words, the equivalent of what was sent to them daily 
 in the old system.  Their individual servers will still get the big 
 tab-delimited file that will still be INSERTed in to their DB line by line.  
 But I'd like to be able to select from the new data as it comes in, once it's 
 been given a new number in the Idx field.

 Is there any way to run a row of data through SELECT queries as it is being 
 INSERTed into a table -- or just after?

 The reason for doing this, instead of INSERTing all the data, then running a 
 program is that as the database grows, pulling out the data will take longer 
 and longer, so if there were a way to screen data as it comes in, that would 
 make it easier to provide instantly available data.

 I also know my knowledge of MySQL is quite limited, so if this can be done in 
 better ways, I'd be interested in hearing about them.

 Thank you!



 Hal
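
One way to "select from the new data as it comes in", sketched under the assumption that the client remembers the highest Idx value it has already seen (table name and variable are hypothetical):

```sql
-- Poll for rows added since the previous fetch, using the
-- auto-incrementing Idx primary key as a high-water mark.
SET @last_idx = 123456;  -- the highest Idx the client has already processed

SELECT *
FROM client_data
WHERE Idx > @last_idx
ORDER BY Idx;
```

Since Idx is the auto-incrementing primary key, this range scan only touches rows inserted after the previous poll, so it stays cheap even as the table grows.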

Re: Death of MySQL popularity?

2010-11-08 Thread Anders Karlsson
I was a MySQL Sales Engineer until a few weeks ago; I spent 6+ years at 
MySQL. MySQL Classic never had InnoDB in it. Actually, the reason for 
the existence of MySQL Classic was just that: MySQL without InnoDB 
for OEMs.


If you wanted a non-GPL MySQL, you had to pay for it. And if MySQL 
wanted a non-GPL InnoDB (in the old days, before Oracle), MySQL had to 
pay for it. So customers that only embedded MyISAM could get by without 
InnoDB included, which lowered the cost for MySQL, as there was no 
InnoDB licence to pay.


Note in the above that this is OEM / Embedded only. For MySQL Enterprise 
customers InnoDB was always included. Why? Because this was a GPL 
distribution, using a GPL InnoDB, so there was no need for an InnoDB 
licence. Simple as that.


In the old scheme then, when I was around, MySQL came in a few different 
shapes:

- MySQL Embedded / OEM
-- With or without InnoDB. Two different prices (MySQL Classic being the 
low end then). Commercial licence.

- MySQL Enterprise
-- The supported MySQL version. Different flavours, mainly using 
different SLAs and different MySQL Enterprise Monitor functionalities. 
GPL licence.

- MySQL Community Edition
-- The good old GPL downloadable version. GPL licence.


/Karlsson
Michael Dykman wrote on 2010-11-08 22:47:

I think Jorge Bruehe already has weighed in.  That is about as direct
as you are likely to hear unless you have Larry Ellison on facebook.

  - michael dykman


On Mon, Nov 8, 2010 at 4:41 PM, Daevid Vincentdae...@daevid.com  wrote:

-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On
Behalf Of Johan De Meersman
Sent: Thursday, November 04, 2010 2:26 AM
To: jcbo...@yahoo.com
Cc: MySQL
Subject: Re: Death of MySQL popularity?

You may want to read that again, but with your glasses on :-)

Subscription means roughly commercial support. The (1)
subscript means
Features only available in Commercial Editions, and is
noted *only* for
Workbench SE, Enterprise Monitor, Enterprise Backup and
Cluster Manager.

I will join you in wondering whether that means Workbench is gonna go
payware, though.



On Thu, Nov 4, 2010 at 10:13 AM, Christoph Boget
christoph.bo...@gmail.comwrote:


http://www.mysql.com/products/

So the free version is going to include only MyISAM?  And

you won't be

able to connect using MySQL Workbench (and presumably apps

like MySQL

Query Browser)?  Otherwise you have to shell out $2k?  Wow.  I think
it might be time to start seriously looking at Postgres...



So there definitely is some confusion out there. Can someone from the
@mysql / @oracle camp please confirm or deny the allegations?

http://blog.herlein.com/2010/11/oracle-is-the-borg-enterprise-software-deve
lopment-will-be-assimilated/

http://digitizor.com/2010/11/05/innodb-dropped-from-oracle-mysql-classic-ed
ition/


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com











a query not using index

2010-11-08 Thread wroxdb
Hello,

I have a query below:

mysql> select * from ip_test where 3061579775 between startNum and endNum;
+------------+------------+---------+----------+------+--------+
| startNum   | endNum     | country | province | city | isp    |
+------------+------------+---------+----------+------+--------+
| 3061514240 | 3061579775 | 中国    | 河南     |      | 联通   |
+------------+------------+---------+----------+------+--------+


the desc shows it isn't using the index:

mysql> desc select * from ip_test where 3061579775 between startNum and endNum;
+----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+
| id | select_type | table   | type | possible_keys   | key  | key_len | ref  | rows   | Extra       |
+----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | ip_test | ALL  | startNum,endNum | NULL | NULL    | NULL | 396528 | Using where |
+----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+
1 row in set (0.01 sec)


the table structure is:

CREATE TABLE `ip_test` (
  `startNum` double(20,0) default NULL,
  `endNum` double(20,0) default NULL,
  `country` varchar(50) NOT NULL default '',
  `province` varchar(50) NOT NULL default '',
  `city` varchar(50) NOT NULL default '',
  `isp` varchar(100) default NULL,
  KEY `startNum` (`startNum`),
  KEY `endNum` (`endNum`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8



please help, thanks in advance.




MySQL clustering and licensing

2010-11-08 Thread Machiel Richards
Good day all

Maybe someone can assist me here as I am not sure where to get this
information from and I need this for a proposed environment for a
client.


1.  The client will have 2 new machines; I had a look at the specs and
they are fairly good considering they will be dedicated to MySQL.  The
question is: can the solution be clustered across 2 nodes without shared
storage?

2.  Are there further licensing implications?

3.  Is this part of the Enterprise edition licence? Currently there is a
MySQL Enterprise Gold support contract in place, however I am not sure
whether it requires additional licences for clustering or not.


 I would really appreciate any suggestions here, I am busy trying to
get the info from the net as well but I am pressured for responses.

Regards
Machiel

 



Re: a query not using index

2010-11-08 Thread Johan De Meersman
Indexes typically only work on the left-hand side of a comparison. Rewrite it as
select * from ip_test where startNum <= 3061579775 and endNum >= 3061579775;

Magic will happen.
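
To spell out the equivalence: a value v satisfies "v BETWEEN startNum AND endNum" exactly when startNum <= v AND endNum >= v (the archive stripped the comparison operators as HTML). A further speculative tweak, not from the thread, is a composite index (the name below is made up):

```sql
-- Rewritten predicate with explicit comparisons.
SELECT * FROM ip_test
WHERE startNum <= 3061579775
  AND endNum   >= 3061579775;

-- Speculative: a composite index lets the optimizer range-scan on
-- startNum and filter endNum from the same index entries, rather than
-- choosing between the two single-column indexes.
ALTER TABLE ip_test ADD KEY idx_start_end (startNum, endNum);
```
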


2010/11/9 wroxdb wro...@gmail.com

 Hello,

 I have a query below:

 mysql> select * from ip_test where 3061579775 between startNum and endNum;
 +------------+------------+---------+----------+------+--------+
 | startNum   | endNum     | country | province | city | isp    |
 +------------+------------+---------+----------+------+--------+
 | 3061514240 | 3061579775 | 中国    | 河南     |      | 联通   |
 +------------+------------+---------+----------+------+--------+


 the desc shows it isn't using the index:

 mysql> desc select * from ip_test where 3061579775 between startNum and endNum;
 +----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+
 | id | select_type | table   | type | possible_keys   | key  | key_len | ref  | rows   | Extra       |
 +----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+
 |  1 | SIMPLE      | ip_test | ALL  | startNum,endNum | NULL | NULL    | NULL | 396528 | Using where |
 +----+-------------+---------+------+-----------------+------+---------+------+--------+-------------+
 1 row in set (0.01 sec)


 the table structure is:

 CREATE TABLE `ip_test` (
  `startNum` double(20,0) default NULL,
  `endNum` double(20,0) default NULL,
  `country` varchar(50) NOT NULL default '',
  `province` varchar(50) NOT NULL default '',
  `city` varchar(50) NOT NULL default '',
  `isp` varchar(100) default NULL,
  KEY `startNum` (`startNum`),
  KEY `endNum` (`endNum`)
 ) ENGINE=MyISAM DEFAULT CHARSET=utf8



 please help, thanks in advance.





-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel