Re: design choice - quite many tables

2007-05-22 Thread Przemysław Klein

Martijn Tonies wrote:

Hi,

  

I'm working on a quite big database. It consists of about 200 tables.
Additionally, about 50 tables are per-year tables (because of annual data). This
means 50 new tables will have to appear in the application every year. And
now I have a question. Should I use separate databases for "annual" data
(i.e. db2006, db2007, etc.) (I don't need constraints on those (annual)
tables) or put all the tables in one database? Is there any way to
'catalogue'/organize tables within one database (namespace/schema)?

Any thoughts?



Yes, in my opinion, you should use the same tables for each year. So no
"tables per year" or "databases per year", unless there is a very, very
specific reason for this.

Having tables on a per-year basis also means you cannot do cross-year
queries easily and you have to adjust your queries according to the current
year.

Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com


  
The reason for distributing the annual data into different tables is that they
are NOT small. They store business documents in my company and hold
about 500k rows each (and will grow each year). After the performance tests we
ran, it turned out that keeping those data in one table (with an additional
'year' column) wouldn't meet our response-time requirements.
I realize that this approach is not proper from a relational point of
view, but it seems that we must separate the annual data. Now, the question
is whether we should keep them in one database (and be prepared for a database
with approx. 500 tables after 3-4 years) or in multiple databases.
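If the yearly split really is unavoidable, MySQL can still present the per-year tables as one. For MyISAM tables, a MERGE table unions identically structured tables, which keeps cross-year queries possible (a sketch; the table and column names here are illustrative, not from the original schema):

```sql
-- Hypothetical per-year MyISAM tables with identical structure
CREATE TABLE docs_2006 (
  id       INT NOT NULL,
  doc_date DATE,
  amount   DECIMAL(10,2)
) ENGINE=MyISAM;
CREATE TABLE docs_2007 LIKE docs_2006;

-- One MERGE table exposes them together for cross-year queries
CREATE TABLE docs_all (
  id       INT NOT NULL,
  doc_date DATE,
  amount   DECIMAL(10,2)
) ENGINE=MERGE UNION=(docs_2006, docs_2007) INSERT_METHOD=LAST;
```

MySQL 5.1's PARTITION BY RANGE achieves a similar effect inside a single logical table.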


Regards,

--
_/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/
Przemek Klein ([EMAIL PROTECTED])


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysqldump running out of memory

2007-05-22 Thread Linuxinfoplus

Fredrik Carlsson wrote:


Hi,

Yes I'm using the -q option with mysqldump, but the result is the same.

This is a replicated environment; the master is running FreeBSD and
the slave NetBSD. On the master, which has only InnoDB tables, there
is no problem running a dump, but that machine is too loaded, so we
cannot afford to run the dump there. The tables on the slave are mostly
MyISAM. Maybe there is some kind of memory buffer on NetBSD that I'm
missing to tune, but I can't figure out what it could be; I've already
increased the ulimit values for the session running the dump.


// Fredrik


Atle Veka wrote:

Have you tried this flag?
-q, --quick Don't buffer query, dump directly to stdout.



On Sun, 29 Apr 2007, Fredrik Carlsson wrote:


Hi,

I have a problem with mysqldump; it's exiting with the message

mysqldump: Error 5: Out of memory (Needed 22042208 bytes) when dumping
table `theTable` at row: 2990911

I have searched the archives and tried what people suggested, but nothing
seems to work. I'm dumping using the following command:
/usr/pkg/bin/mysqldump -h localhost -B theDatabase --skip-opt
--max_allowed_packet=1024M -q

Any tips on how to get the dump running? The full dump should be about
15-20 GB in size, but I never seem to get there.
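For reference, a dump invocation that keeps client-side buffering to a minimum might look like this (a hedged sketch; the database name comes from the post, and note that --skip-opt also turns off --quick, so -q has to be re-enabled after it, as the poster does):

```shell
# Stream rows one at a time (--quick) and compress on the fly rather than
# buffering whole tables in client memory; a smaller packet cap also limits
# the client's per-row allocation.
/usr/pkg/bin/mysqldump -h localhost -B theDatabase \
  --quick --max_allowed_packet=64M | gzip > theDatabase.sql.gz
```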

// Fredrik Carlsson

How much free memory does the machine have?




Help on selecting a View with 3 Billions rows !

2007-05-22 Thread Chris Prakoso
 Hi all,




I am having a big performance problem here, and I was wondering if you could give me some advice.

I have 2 big static tables, one with 65K rows and the other with 54K rows.
I need to do a calculation for each combination of rows from these two tables.


So what I did was to create a view like so:

select
    `c`.`TRACT` AS `TRACT`,
    `c`.`LNG` AS `tlng`,
    `c`.`LAT` AS `tlat`,
    `p`.`NAME` AS `name`,
    `p`.`LNG` AS `lng`,
    `p`.`LAT` AS `lat`,
    `Calc_Distance`(`c`.`LAT`,`c`.`LNG`,`p`.`LAT`,`p`.`LNG`) AS `distance`
from
    (`tracts` `c` join `parks` `p`);

This gives me a view with more than 3,500,000,000 rows!

Now, the second part of this exercise is to grab only the set of rows where the
distance is less than 50.


So, I thought I would create another view like so:

select
    `t`.`TRACT` AS `TRACT`,
    `t`.`tlng` AS `tlng`,
    `t`.`tlat` AS `tlat`,
    `t`.`name` AS `name`,
    `t`.`lng` AS `lng`,
    `t`.`lat` AS `lat`,
    `t`.`distance` AS `distance`
from
    `tractparkdistance` `t`
where
    (`t`.`distance` < 50);

tractparkdistance is the name of the first view.

But opening this view takes a very long time! I just couldn't wait for it.



So, I thought I would try to export this to an external file via
SELECT ... INTO OUTFILE, and re-import the resulting file back into a new table.

So I did like so:

select *
into outfile "park_distances"
from tractparkdistance
where distance < 50;

Running this statement took more than 12 hours, and it was still going
when I killed the process. So far it had produced an 800 MB file.

Moreover, I still need to do a mean calculation over that 'limited' set of
data, and then do more calculations.


Next, I tried INSERT INTO ... SELECT like this:

insert into park_distance_radius50s (tract,tlng,tlat,name,lng,lat,distance)
select tract,tlng,tlat,name,lng,lat,distance
from tractparkdistance
where distance < 50;

This was running a very long time as well.

I think I'm out of my depth here. Does anybody have any ideas?
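One structural observation: the view is an unconstrained cross join, so any query against it must evaluate Calc_Distance for all 65K x 54K pairs. If Calc_Distance is a geographic distance, a cheap bounding-box join condition can eliminate most pairs before the expensive function runs (a hedged sketch; it assumes distances in miles, where one degree of latitude is roughly 69 miles, and benefits from an index on parks(LAT, LNG)):

```sql
-- Prefilter with a bounding box so Calc_Distance only runs on nearby pairs.
-- The longitude margin should really widen with latitude; this is a sketch.
SELECT c.TRACT, p.NAME,
       Calc_Distance(c.LAT, c.LNG, p.LAT, p.LNG) AS distance
FROM tracts c
JOIN parks p
  ON  p.LAT BETWEEN c.LAT - (50/69) AND c.LAT + (50/69)
  AND p.LNG BETWEEN c.LNG - (50/69) AND c.LNG + (50/69)
WHERE Calc_Distance(c.LAT, c.LNG, p.LAT, p.LNG) < 50;
```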




Thanks very much in advance !




Regards,


Chris





InnoDB and raw devices and DRBD question

2007-05-22 Thread Mariella Petrini


Hi ALL,


I would like to use InnoDB with raw devices for the tablespaces in
innodb_data_file_path, on both MySQL 5.0.x and MySQL 5.1.x.

Is it possible to use DRBD (Distributed Replicated Block Device)
with InnoDB and raw devices?

If yes, what are the constraints?



Thanks in advance for your help,

Mariella

   

Re: dump db without locking up the server

2007-05-22 Thread Dan Buettner

Tim, it's a gnarly problem that most DBAs struggle with in some form or
another, whether using MySQL or another database package.

If you're using only MyISAM tables, MySQL's free, included 'mysqlhotcopy'
script might work for you, as it's generally a bit faster than mysqldump in
my experience.  If you're on InnoDB, there is a commercial product that
offers live backups ('InnoBackup' I think).  No experience with that myself.

You could also look at setting up a second database server replicating from
the first, and run your backups off the second server.

There's also the subject of "consistent snapshot" vs. a simple serial backup
of your tables, which can be a tricky thing to work out satisfactorily
without complete database locks.
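Dan's options can be made concrete for the common case. If the tables are InnoDB, mysqldump's --single-transaction flag takes a consistent snapshot without locking anything, and the dump process itself can be deprioritized at the OS level (a hedged sketch; the database name and output path are illustrative):

```shell
# Consistent InnoDB snapshot, no table locks, lowest CPU priority.
nice -n 19 mysqldump --single-transaction --quick mydb > /backup/mydb.sql
```

For MyISAM tables --single-transaction does not help; that is where mysqlhotcopy or a replication slave comes in.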

Dan





On 5/22/07, tim h <[EMAIL PROTECTED]> wrote:


is there a safe way to dump/backup a live database without disrupting
service? When I run mysqldump the whole server slows to a crawl and
queries start taking 60+ seconds to complete.

Is there a way to make mysqldump run at low priority?
Worst case scenario is I miss the queries that took place during the
backup, right?

Tim



Re: Bash script array from MySQL query - HELP Please!!!

2007-05-22 Thread BJ Swope

I would look at the 15th URL to see if there are special characters in there
that are breaking the array somehow.

On 5/22/07, Ben Benson <[EMAIL PROTECTED]> wrote:




I'm having problems getting a monitoring script to run.



I've put the troublesome bit in a separate script just to test, and it
goes
like this:



declare -a HN    # declares the array "HN"

HN=(`echo "SELECT url FROM hosts" | mysql --user=netmon --password=n3tm0n --skip-column-names check_http`)    # runs the query and assigns each record to a new element in the array

echo ${#HN}    # echoes the number of elements in the array

for ((i=0;i<${#HN};i++)); do

echo ${HN[${i}]}    # echo the value of each element

done



Seems simple enough, yeah?! Well, if I run echo "SELECT url FROM hosts" |
mysql --user=user --password=pass --skip-column-names check_http at the
command line, I get all of the records - 32. If I run the script above, it
simply refuses to put more than 14 elements in the array.



Then, to confuse things even more, if I sort the query, it gives a different
count depending on what it's sorted by!! For example, if I sort it by 'url'
it seems to generate 569 elements!



Can anyone please spot the undoubtedly obvious error I've made here?! I've
been scratching my head for days, to no avail!



Many thanks in advance,



Ben Benson







--
We are all slave to our own paradigm. -- Joshua Williams

If the letters PhD appear after a person's name, that person will remain
outdoors even after it's started raining. -- Jeff Kay


dump db without locking up the server

2007-05-22 Thread tim h

is there a safe way to dump/backup a live database without disrupting
service? When I run mysqldump the whole server slows to a crawl and queries
start taking 60+ seconds to complete.

Is there a way to make mysqldump run at low priority?
Worst case scenario is I miss the queries that took place during the backup,
right?

Tim


corruption in db. myisam bad? innodb good?

2007-05-22 Thread tim h

hi. The database is MyISAM, 5.8 GB, 7 million records.
We recently had some corruption, I think due to a mysqld service failure.
10 tables were marked crashed.

Questions --
How can I prevent or minimize this?
Will switching to InnoDB help?
Will converting all my queries to transactions help?
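Whichever engine you settle on, crashed MyISAM tables can usually be detected and rebuilt in place, and a table can be converted to InnoDB with a single statement (a sketch; the table name is illustrative, and the myisam-recover option in my.cnf can automate the check/repair at startup):

```sql
CHECK TABLE mytable;                 -- reports whether the table is corrupt
REPAIR TABLE mytable;                -- rebuilds a crashed MyISAM table
ALTER TABLE mytable ENGINE=InnoDB;   -- moves it to a crash-recovering engine
```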

thanks.

--
Tim H
Berkeley, CA


Re: Upgraded to 5.0.x from 4.1.x and ORDER BY sucks!

2007-05-22 Thread David Sparks
Gmail User wrote:
> I had perfectly working complex queries both with LEFT JOIN and without
> and they were returning results in under a second. After upgrade to
> 5.0.x, the same queries would return results in 20-30 second range.

I had a similar problem once (not related to 4.x->5.x though); it turned
out that after some maintenance MySQL had lost the stats for the table and
was doing a terrible job of optimizing queries.  A quick ANALYZE of all
the tables in the db fixed the problem.

mysqlcheck -h$host -u$user -p$pass --analyze $dbname
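The same statistics refresh can also be issued per table from a SQL session (the table name here is illustrative):

```sql
ANALYZE TABLE orders;  -- rebuilds the key distribution statistics
                       -- the optimizer uses to choose indexes
```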

ds



> Through trial and error, I discovered that in the case of SELECT ... FROM
> table1, table2 ... ORDER BY table2.column will be very slow (10s of
> seconds), while the same query ordered by table1.column runs in under
> 1 second. If I switch the order of tables in FROM, the same holds
> true in reverse. Is this a bug, or did I miss something in my
> googling? More importantly, what can I do to change this--I need those
> sorts! :-(
> 
> I got same results with 5.0.22, 5.0.27 (Linux).
> 
> 
> TIA,
> 
> Ed
> 
> 





Re: Problem with complex query

2007-05-22 Thread Juan Eduardo Moreno

Patricio,

2) What did you mean by "indexes for the query"?


A: Create some indexes for the query in order to improve performance.


Regards,
Juan Eduardo

On 5/22/07, Patricio A. Bruna <[EMAIL PROTECTED]> wrote:


Juan Eduardo,

1) The versions are the same.
2) What you meant with " indexes for the query"?
3) Im goint to try that.
4) Good Idea.

Thanks.

- Mensaje Original -
De: "Juan Eduardo Moreno" <[EMAIL PROTECTED]>
Para: "Patricio A. Bruna" <[EMAIL PROTECTED]>
Cc: mysql@lists.mysql.com
Enviados: martes 22 de mayo de 2007 13H31 (GMT-0400) America/Santiago
Asunto: Re: Problem with compex query

Hi Patricio,

Your EXPLAIN is scary. No use of indexes... Please ask the programmers
to create some indexes in the schema.

Does the development server have the same MySQL version as the production
environment? 4.0.18?

Resume:

1) Please check the version of MySQL (prod and development).
2) Please use indexes for the query.
3) For the session in WebSphere, please ask the programmers to set the
environment only for the query. Use explicit code for that:
SET SESSION SQL_BIG_SELECTS=1;
SET SESSION SQL_MAX_JOIN_SIZE=DEFAULT;
4) To improve the insert, try to commit every (for example) 1 records.
Ask the programmers to provide a cursor or something like that.


Regards,
Juan

On 5/22/07, Patricio A. Bruna <[EMAIL PROTECTED] > wrote:
>
> Juan Eduardo,
>
> Great to hear about you :)
> One thing you must know is that i can run this query from a mysql
> client, without the insert part.
> The problem only happens when is run from the J2EE (Websphere - drp)
> application.
>
> I run the query as you asked, here are some results:
>
> 
> | drp_id_sku | drp_id_deposito | drp_volumen | 1 | drp_dia | drp_mes | drp_anno | drp_fecha_dia_cargado |
> | 161 | 35 | 1.6 | 1 | 12 | 5 | 2007 | 2007-05-12 00:00:00 |
> | 161 | 20 | 1.5 | 1 | 2 | 5 | 2007 | 2007-05-02 00:00:00 |
> | 161 | 22 | 0.2 | 1 | 11 | 5 | 2007 | 2007-05-11 00:00:00 |
> | 161 | 13 | 0.2 | 1 | 7 | 5 | 2007 | 2007-05-07 00:00:00 |
> | 161 | 16 | 2.2 | 1 | 9 | 5 | 2007 | 2007-05-09 00:00:00 |
> | 161 | 35 | 4.0 | 1 | 3 | 5 | 2007 | 2007-05-03 00:00:00 |
> | 161 | 16 | 24.0 | 1 | 4 | 5 | 2007 | 2007-05-04 00:00:00 |
> | 161 | 2 | 0.2 | 1 | 9 | 5 | 2007 | 2007-05-09 00:00:00 |
> | 163 | 35 | 16.6 | 1 | 11 | 5 | 2007 | 2007-05-11 00:00:00 |
> | 163 | 36 | 2.2 | 1 | 4 | 5 | 2007 | 2007-05-04 00:00:00 |
> | 163 | 16 | -2.4 | 1 | 8 | 5 | 2007 | 2007-05-08 00:00:00 |
> | 163 | 35 | 8.8 | 1 | 2 | 5 | 2007 | 2007-05-02 00:00:00 |
> | 163 | 32 | 13.0 | 1 | 8 | 5 | 2007 | 2007-05-08 00:00:00 |
> | 163 | 34 | 7.6 | 1 | 7 | 5 | 2007 | 2007-05-07 00:00:00 |
>
> And the EXPLAIN:
>
> | table | type | possible_keys | key | key_len | ref | rows | Extra |
> | d | ALL | NULL | NULL | NULL | NULL | 37 | Using temporary |
> | md | ALL | NULL | NULL | NULL | NULL | 32 | Using where |
> | vv | ALL | NULL | NULL | NULL | NULL | 12694 | Using where |
> | s | ALL | NULL | NULL | NULL | NULL | 104 | |
> | ms | ALL | NULL | NULL | NULL | NULL | 184 | Using where; Distinct |
> 5 rows in set (0.00 sec)
>
>
> Any ideas?
>
> - "Juan Eduardo Moreno" < [EMAIL PROTECTED]> escribió:
> > Hi Patricio,
> >
> > Some options are to  prevent programmers/users  make a "bad" queries
> > into
> > the database' SQL_BIG_SELECTS = 0 | 1
> >
> > The documentation say :
> > "If set to 0, MySQL will abort if a SELECT is attempted that probably
> > will
> > take a very long time. This is useful when an inadvisable WHERE
> > statement
> > has been issued. A big query is defined as a SELECT that probably will
> > have
> > to examine more than max_join_size rows. The default value

Re: design choice - quite many tables

2007-05-22 Thread Martijn Tonies
Hi,

> I'm working on a quite big database. It consists of about 200 tables.
> Additionally, about 50 tables are per-year tables (because of annual data). This
> means 50 new tables will have to appear in the application every year. And
> now I have a question. Should I use separate databases for "annual" data
> (i.e. db2006, db2007, etc.) (I don't need constraints on those (annual)
> tables) or put all the tables in one database? Is there any way to
> 'catalogue'/organize tables within one database (namespace/schema)?
>
> Any thoughts?

Yes, in my opinion, you should use the same tables for each year. So no
"tables per year" or "databases per year", unless there is a very, very
specific reason for this.
queries easily and you have to adjust your queries according to the current
year.

Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





Re: Problem with complex query

2007-05-22 Thread Patricio A. Bruna
Juan Eduardo,

1) The versions are the same.
2) What did you mean by "indexes for the query"?
3) I'm going to try that.
4) Good idea.

Thanks.

- Original Message -
From: "Juan Eduardo Moreno" <[EMAIL PROTECTED]>
To: "Patricio A. Bruna" <[EMAIL PROTECTED]>
Cc: mysql@lists.mysql.com
Sent: Tuesday, 22 May 2007, 13:31 (GMT-0400) America/Santiago
Subject: Re: Problem with complex query

Hi Patricio,

Your EXPLAIN is scary. No use of indexes... Please ask the programmers
to create some indexes in the schema.

Does the development server have the same MySQL version as the production
environment? 4.0.18?

Resume:

1) Please check the version of MySQL (prod and development).
2) Please use indexes for the query.
3) For the session in WebSphere, please ask the programmers to set the
environment only for the query. Use explicit code for that:
SET SESSION SQL_BIG_SELECTS=1;
SET SESSION SQL_MAX_JOIN_SIZE=DEFAULT;
4) To improve the insert, try to commit every (for example) 1 records.
Ask the programmers to provide a cursor or something like that.


Regards, 
Juan 


On 5/22/07, Patricio A. Bruna < [EMAIL PROTECTED] > wrote: 

Juan Eduardo, 

Great to hear from you :)
One thing you must know is that I can run this query from a mysql client,
without the insert part.
The problem only happens when it is run from the J2EE (WebSphere - drp)
application.

I run the query as you asked, here are some results: 

| drp_id_sku | drp_id_deposito | drp_volumen | 1 | drp_dia | drp_mes | drp_anno | drp_fecha_dia_cargado |
| 161 | 35 | 1.6 | 1 | 12 | 5 | 2007 | 2007-05-12 00:00:00 | 
| 161 | 20 | 1.5 | 1 | 2 | 5 | 2007 | 2007-05-02 00:00:00 | 
| 161 | 22 | 0.2 | 1 | 11 | 5 | 2007 | 2007-05-11 00:00:00 | 
| 161 | 13 | 0.2 | 1 | 7 | 5 | 2007 | 2007-05-07 00:00:00 | 
| 161 | 16 | 2.2 | 1 | 9 | 5 | 2007 | 2007-05-09 00:00:00 | 
| 161 | 35 | 4.0 | 1 | 3 | 5 | 2007 | 2007-05-03 00:00:00 | 
| 161 | 16 | 24.0 | 1 | 4 | 5 | 2007 | 2007-05-04 00:00:00 | 
| 161 | 2 | 0.2 | 1 | 9 | 5 | 2007 | 2007-05-09 00:00:00 | 
| 163 | 35 | 16.6 | 1 | 11 | 5 | 2007 | 2007-05-11 00:00:00 | 
| 163 | 36 | 2.2 | 1 | 4 | 5 | 2007 | 2007-05-04 00:00:00 | 
| 163 | 16 | -2.4 | 1 | 8 | 5 | 2007 | 2007-05-08 00:00:00 | 
| 163 | 35 | 8.8 | 1 | 2 | 5 | 2007 | 2007-05-02 00:00:00 | 
| 163 | 32 | 13.0 | 1 | 8 | 5 | 2007 | 2007-05-08 00:00:00 | 
| 163 | 34 | 7.6 | 1 | 7 | 5 | 2007 | 2007-05-07 00:00:00 | 

And the EXPLAIN: 

| table | type | possible_keys | key | key_len | ref | rows | Extra | 
| d | ALL | NULL | NULL | NULL | NULL | 37 | Using temporary | 
| md | ALL | NULL | NULL | NULL | NULL | 32 | Using where | 
| vv | ALL | NULL | NULL | NULL | NULL | 12694 | Using where | 
| s | ALL | NULL | NULL | NULL | NULL | 104 | | 
| ms | ALL | NULL | NULL | NULL | NULL | 184 | Using where; Distinct | 
5 rows in set (0.00 sec) 


Any ideas? 

- "Juan Eduardo Moreno" < [EMAIL PROTECTED] > wrote: 
> Hi Patricio, 
> 
> One option to prevent programmers/users from making "bad" queries 
> against the database is SQL_BIG_SELECTS = 0 | 1. 
> 
> The documentation says: 
> "If set to 0, MySQL will abort if a SELECT is attempted that probably will 
> take a very long time. This is useful when an inadvisable WHERE statement 
> has been issued. A big query is defined as a SELECT that probably will have 
> to examine more than max_join_size rows. The default value for a new 
> connection is 1 (which will allow all SELECT statements)." 
> 
> For testing try this : 
> 
> 1) 
> 
> SET SESSION SQL_BIG_SELECTS=1; 
> SET SESSION SQL_MAX_JOIN_SIZE=DEFAULT; 
> Run the query; 
> 
> 2) Send your results and explain of query ( explain select ) 
> 
> Regards, 
> Juan 
> 
> 
> On 5/22/07, Patricio A. Bruna < [EMAIL PROTECTED] > wrote: 
> > 
> > Friends, 
> > im having troubles with the following query: 
> > 
> > --- 
> > INSERT drp_volumen_venta_diaria_deposito (drp_id_sku, 
> drp_id_deposito, 
> > drp_volumen, drp_es_cct, drp_dia, drp_mes, drp_anno, 
> drp_fecha_dia_cargado ) 
> > SELECT DISTINCT s.drp_id_sku, d.drp_id_deposito, vv.drp_volumen, 1, 
> > vv.drp_dia, vv.drp_mes, vv.drp_anno,vv.drp_fecha_dia_cargado 
> > FROM drp_volumen_venta_diaria_tmp vv, drp_sku s, drp_deposito d, 
> > drp_mapeo_sku ms, drp_mapeo_deposito md 
> > WHERE vv.drp_codigo_sku = ms.drp_cod_sku_odyssey AND 
> ms.drp_cod_sku_sap = 
> > s.drp_codigo_sku 
> > AND REPLACE

Re: Upgraded to 5.0.x from 4.1.x and ORDER BY sucks!

2007-05-22 Thread Gmail User

When you upgraded from 4.1 to 5.0, did you do an in-place upgrade, or mysqldump 
your data and then re-import?


As replied to Sebastian's post, in-place.


Try using either mysqldump or mysql-administrator to dump out your data to an 
.sql file. Then re-import all of your data into 5.x. You will see a significant 
difference in your query speeds once you do this.


Will this still hold true, even if I dump data out of MySQL 5 and
re-import it, or do I need to downgrade first?


As to your query cache, make sure that it's on (on by default) and, based on 
your tables, either your MyISAM key_buffer_size or your InnoDB buffer_pool_size 
are correct for your server's amount of RAM.


I guess I will have to check if 8MB is good on 500MB RAM. I did some
research back when messing with 4.1, so a good time to do it again.




Re: Mysql and Textarea

2007-05-22 Thread Hank

On 5/22/07, sam rumaizan <[EMAIL PROTECTED]> wrote:


  I'm just a php beginner,
so please be patient with my stupid questions.

What am I missing in this code that causes the function to fail to update the
textarea?

The Update button erases the old information from the mysql database and replaces
it with nothing. Basically it can't read what is inside the Textarea box. Why??

Read the highlighted code.



Your problem is in this line:

mysql_real_escape_string($_REQUEST['Assign_Engineer'][' .$id .
']['Job_Title']);


you want something more like (may or may not work):

mysql_real_escape_string($_REQUEST["Assign_Engineer[$id]['Job_Title']"]);

Since this is a PHP problem, and you can't figure it out, I'd suggest moving
your request to a PHP list.
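A note on the fix: if the HTML input is named Assign_Engineer[<id>][Job_Title], PHP parses it into a nested array, so the lookup wants real array subscripts rather than one quoted string key (a sketch; the field and variable names come from the post, and mysql_real_escape_string is the era's mysql extension):

```php
// $_REQUEST['Assign_Engineer'] arrives as a nested array when the form
// field is named Assign_Engineer[<id>][Job_Title]
$job = mysql_real_escape_string($_REQUEST['Assign_Engineer'][$id]['Job_Title']);
```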

-Hank


Re: Upgraded to 5.0.x from 4.1.x and ORDER BY sucks!

2007-05-22 Thread Gmail User

Is it possible you had set up some query cache in 4, but not currently in 5?


It may not be optimized, but yes, the query cache is enabled, all 25 MB of it. :-)


How did you 'upgrade' your data?


Regrettably, in place.

Interestingly, I was recovering after a server crash that chopped off a
table. After upgrading the server (in place), I re-read the corrupt
table from a script dumped by mysqlbinlog. It is THAT table that is
causing me grief. I thought it was some missing indices, but I have
indices on all columns I use in WHERE.


What does this mean exactly?
With the table order reversed, is the query fast ordering on the first or the second table?


'select ... from table1, table2, table3 ... order by table1.column'  is FAST

'select ... from table1, table2, table3 ... order by table2.column' is SLOW



Did you try an EXPLAIN?


Yes, thanks for reminding me to use it. I compared the two; the slow
one uses a temporary table and filesort; the fast one does not. Both use
where, and all select types are SIMPLE.

So, with your help, I know why it is slow. Is there a way out of
this without downgrading the server?
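One avenue short of downgrading: when the slow plan shows "Using temporary; Using filesort", an index on the driving table that covers both the WHERE column and the ORDER BY column sometimes lets the 5.0 optimizer read rows already in sorted order (a sketch; the table and column names are illustrative, not from the actual schema):

```sql
-- Hypothetical: the query filters on table2.status and sorts on table2.created
CREATE INDEX idx_status_created ON table2 (status, created);
```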




Re: Problem with complex query

2007-05-22 Thread Juan Eduardo Moreno

Hi Patricio,

Your EXPLAIN is scary. No use of indexes... Please ask the programmers
to create some indexes in the schema.

Does the development server have the same MySQL version as the production
environment? 4.0.18?

Resume:

1) Please check the version of MySQL (prod and development).
2) Please use indexes for the query.
3) For the session in WebSphere, please ask the programmers to set the
environment only for the query. Use explicit code for that:
SET SESSION SQL_BIG_SELECTS=1;
SET SESSION SQL_MAX_JOIN_SIZE=DEFAULT;
4) To improve the insert, try to commit every (for example) 1 records.
Ask the programmers to provide a cursor or something like that.
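The EXPLAIN quoted in this thread shows type=ALL (a full table scan) for every table, so point 2 is the decisive one. A starting point, using join columns that appear in the posted query (the index names are illustrative):

```sql
CREATE INDEX idx_ms_odyssey ON drp_mapeo_sku (drp_cod_sku_odyssey);
CREATE INDEX idx_s_codigo   ON drp_sku (drp_codigo_sku);
-- The REPLACE(UCASE(TRIM(...))) join conditions on the deposito aliases can
-- never use an index; storing a pre-normalized alias column would fix that.
```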


Regards,
Juan

On 5/22/07, Patricio A. Bruna <[EMAIL PROTECTED] > wrote:


Juan Eduardo,

Great to hear from you :)
One thing you must know is that I can run this query from a mysql client,
without the insert part.
The problem only happens when it is run from the J2EE (WebSphere - drp)
application.

I run the query as you asked, here are some results:

| drp_id_sku | drp_id_deposito | drp_volumen | 1 | drp_dia | drp_mes | drp_anno | drp_fecha_dia_cargado |
| 161 | 35 | 1.6 | 1 | 12 | 5 | 2007 | 2007-05-12 00:00:00 |
| 161 | 20 | 1.5 | 1 | 2 | 5 | 2007 | 2007-05-02 00:00:00 |
| 161 | 22 | 0.2 | 1 | 11 | 5 | 2007 | 2007-05-11 00:00:00 |
| 161 | 13 | 0.2 | 1 | 7 | 5 | 2007 | 2007-05-07 00:00:00 |
| 161 | 16 | 2.2 | 1 | 9 | 5 | 2007 | 2007-05-09 00:00:00 |
| 161 | 35 | 4.0 | 1 | 3 | 5 | 2007 | 2007-05-03 00:00:00 |
| 161 | 16 | 24.0 | 1 | 4 | 5 | 2007 | 2007-05-04 00:00:00 |
| 161 | 2 | 0.2 | 1 | 9 | 5 | 2007 | 2007-05-09 00:00:00 |
| 163 | 35 | 16.6 | 1 | 11 | 5 | 2007 | 2007-05-11 00:00:00 |
| 163 | 36 | 2.2 | 1 | 4 | 5 | 2007 | 2007-05-04 00:00:00 |
| 163 | 16 | -2.4 | 1 | 8 | 5 | 2007 | 2007-05-08 00:00:00 |
| 163 | 35 | 8.8 | 1 | 2 | 5 | 2007 | 2007-05-02 00:00:00 |
| 163 | 32 | 13.0 | 1 | 8 | 5 | 2007 | 2007-05-08 00:00:00 |
| 163 | 34 | 7.6 | 1 | 7 | 5 | 2007 | 2007-05-07 00:00:00 |

And the EXPLAIN:

| table | type | possible_keys | key | key_len | ref | rows | Extra |
| d | ALL | NULL | NULL | NULL | NULL | 37 | Using temporary |
| md | ALL | NULL | NULL | NULL | NULL | 32 | Using where |
| vv | ALL | NULL | NULL | NULL | NULL | 12694 | Using where |
| s | ALL | NULL | NULL | NULL | NULL | 104 | |
| ms | ALL | NULL | NULL | NULL | NULL | 184 | Using where; Distinct |
5 rows in set (0.00 sec)


Any ideas?

- "Juan Eduardo Moreno" < [EMAIL PROTECTED]> wrote:
> Hi Patricio,
>
> One option to prevent programmers/users from making "bad" queries
> against the database is SQL_BIG_SELECTS = 0 | 1.
>
> The documentation says:
> "If set to 0, MySQL will abort if a SELECT is attempted that probably will
> take a very long time. This is useful when an inadvisable WHERE statement
> has been issued. A big query is defined as a SELECT that probably will have
> to examine more than max_join_size rows. The default value for a new
> connection is 1 (which will allow all SELECT statements)."
>
> For testing try this :
>
> 1)
>
> SET SESSION SQL_BIG_SELECTS=1;
> SET SESSION SQL_MAX_JOIN_SIZE=DEFAULT;
> Run the query;
>
> 2) Send your results and explain of query ( explain select )
>
> Regards,
> Juan
>
>
> On 5/22/07, Patricio A. Bruna <[EMAIL PROTECTED] > wrote:
> >
> > Friends,
> > im having troubles with the following query:
> >
> > ---
> > INSERT drp_volumen_venta_diaria_deposito (drp_id_sku,
> drp_id_deposito,
> > drp_volumen, drp_es_cct, drp_dia, drp_mes, drp_anno,
> drp_fecha_dia_cargado )
> > SELECT DISTINCT s.drp_id_sku, d.drp_id_deposito, vv.drp_volumen, 1,
> > vv.drp_dia, vv.drp_mes, vv.drp_anno,vv.drp_fecha_dia_cargado
> > FROM drp_volumen_venta_diaria_tmp vv, drp_sku s, drp_

Bash script array from MySQL query - HELP Please!!!

2007-05-22 Thread Ben Benson
 

I'm having problems getting a monitoring script to run.  

 

I've put the troublesome bit in a separate script just to test, and it goes
like this:

 

declare -a HN    # declares the array "HN"

HN=(`echo "SELECT url FROM hosts" | mysql --user=netmon --password=n3tm0n --skip-column-names check_http`)    # runs the query and assigns each record to a new element in the array

echo ${#HN}    # echoes the number of elements in the array

for ((i=0;i<${#HN};i++)); do

echo ${HN[${i}]}    # echo the value of each element

done

 

Seems simple enough, yeah?! Well, if I run echo "SELECT url FROM hosts" |
mysql --user=user --password=pass --skip-column-names check_http at the
command line, I get all of the records - 32. If I run the script above, it
simply refuses to put more than 14 elements in the array.

 

Then, to confuse things even more, if I sort the query, it gives a different
count depending on what it's sorted by!! For example, if I sort it by 'url'
it seems to generate 569 elements! 

 

Can anyone please spot the undoubtedly obvious error I've made here?! I've
been scratching my head for days, to no avail!

 

Many thanks in advance,

 

Ben Benson
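[A likely culprit in the script above: `${#HN}` expands to the character length of the first array element, not the number of elements; `${#HN[@]}` gives the element count. That would explain the constant "14 elements" (a 14-character first URL) and the jump to "569" after sorting changed which URL came first. Also note that the unquoted backtick expansion splits on all whitespace, so any value containing spaces becomes several elements. A minimal sketch:]

```shell
# ${#HN} vs ${#HN[@]}: length of the first element vs. number of elements
HN=(alpha beta gamma)
echo "${#HN}"      # 5  (characters in "alpha")
echo "${#HN[@]}"   # 3  (elements in the array)

# iterate safely over every element, preserving each one whole
for u in "${HN[@]}"; do
  echo "$u"
done
```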

 



Re: Problem with complex query

2007-05-22 Thread Patricio A. Bruna
Juan Eduardo,

Great to hear from you :)
One thing you must know is that I can run this query from a mysql client,
without the insert part.
The problem only happens when it is run from the J2EE (WebSphere - drp)
application.

I run the query as you asked, here are some results:

| drp_id_sku | drp_id_deposito | drp_volumen | 1 | drp_dia | drp_mes | drp_anno | drp_fecha_dia_cargado |
| 161 | 35 | 1.6 | 1 | 12 | 5 | 2007 | 2007-05-12 00:00:00 |
| 161 | 20 | 1.5 | 1 | 2 | 5 | 2007 | 2007-05-02 00:00:00 |
| 161 | 22 | 0.2 | 1 | 11 | 5 | 2007 | 2007-05-11 00:00:00 |
| 161 | 13 | 0.2 | 1 | 7 | 5 | 2007 | 2007-05-07 00:00:00 |
| 161 | 16 | 2.2 | 1 | 9 | 5 | 2007 | 2007-05-09 00:00:00 |
| 161 | 35 | 4.0 | 1 | 3 | 5 | 2007 | 2007-05-03 00:00:00 |
| 161 | 16 | 24.0 | 1 | 4 | 5 | 2007 | 2007-05-04 00:00:00 |
| 161 | 2 | 0.2 | 1 | 9 | 5 | 2007 | 2007-05-09 00:00:00 |
| 163 | 35 | 16.6 | 1 | 11 | 5 | 2007 | 2007-05-11 00:00:00 |
| 163 | 36 | 2.2 | 1 | 4 | 5 | 2007 | 2007-05-04 00:00:00 |
| 163 | 16 | -2.4 | 1 | 8 | 5 | 2007 | 2007-05-08 00:00:00 |
| 163 | 35 | 8.8 | 1 | 2 | 5 | 2007 | 2007-05-02 00:00:00 |
| 163 | 32 | 13.0 | 1 | 8 | 5 | 2007 | 2007-05-08 00:00:00 |
| 163 | 34 | 7.6 | 1 | 7 | 5 | 2007 | 2007-05-07 00:00:00 |

And the EXPLAIN:

| table | type | possible_keys | key | key_len | ref | rows | Extra |
| d | ALL | NULL | NULL | NULL | NULL | 37 | Using temporary |
| md | ALL | NULL | NULL | NULL | NULL | 32 | Using where |
| vv | ALL | NULL | NULL | NULL | NULL | 12694 | Using where |
| s | ALL | NULL | NULL | NULL | NULL | 104 | |
| ms | ALL | NULL | NULL | NULL | NULL | 184 | Using where; Distinct |
5 rows in set (0.00 sec)


Any ideas?

- "Juan Eduardo Moreno" <[EMAIL PROTECTED]> wrote:
> Hi Patricio,
> 
> One option to prevent programmers/users from making "bad" queries
> against the database is SQL_BIG_SELECTS = 0 | 1.
> 
> The documentation says:
> "If set to 0, MySQL will abort if a SELECT is attempted that probably will
> take a very long time. This is useful when an inadvisable WHERE statement
> has been issued. A big query is defined as a SELECT that probably will have
> to examine more than max_join_size rows. The default value for a new
> connection is 1 (which will allow all SELECT statements)."
> 
> For testing try this :
> 
> 1)
> 
> SET SESSION SQL_BIG_SELECTS=1;
> SET SESSION SQL_MAX_JOIN_SIZE=DEFAULT;
> Run the query;
> 
> 2) Send your results and explain of query ( explain select )
> 
> Regards,
> Juan
> 
> 
> On 5/22/07, Patricio A. Bruna <[EMAIL PROTECTED]> wrote:
> >
> > Friends,
> > I'm having trouble with the following query:
> >
> > ---
> > INSERT drp_volumen_venta_diaria_deposito (drp_id_sku,
> drp_id_deposito,
> > drp_volumen, drp_es_cct, drp_dia, drp_mes, drp_anno,
> drp_fecha_dia_cargado )
> > SELECT DISTINCT s.drp_id_sku, d.drp_id_deposito, vv.drp_volumen, 1,
> > vv.drp_dia, vv.drp_mes, vv.drp_anno,vv.drp_fecha_dia_cargado
> > FROM drp_volumen_venta_diaria_tmp vv, drp_sku s, drp_deposito d,
> > drp_mapeo_sku ms, drp_mapeo_deposito md
> > WHERE vv.drp_codigo_sku = ms.drp_cod_sku_odyssey AND
> ms.drp_cod_sku_sap =
> > s.drp_codigo_sku
> > AND REPLACE(UCASE(TRIM(vv.drp_codigo_deposito)),' ','')=
> > REPLACE(UCASE(TRIM(md.drp_alias_deposito_odyssey)),' ','') AND
> > REPLACE(UCASE(TRIM(md.drp_alias_deposito_sap)),' ','') =
> > REPLACE(UCASE(TRIM(d.drp_alias_deposito)),' ','')
> > AND
> > CONCAT(vv.drp_anno,if(LENGTH(vv.drp_mes)=1,CONCAT('0',vv.drp_mes),
> > vv.drp_mes)
> > , if(LENGTH(vv.drp_dia)=1,CONCAT('0',vv.drp_dia),vv.drp_dia) ) >= '
> > 20070501 '
> > AND
> > CONCAT(vv.drp_anno,if(LENGTH(vv.drp_mes)=1,CONCAT('0',vv.drp_mes),
> > vv.drp_mes

Re: Mysql and Textarea

2007-05-22 Thread Gerald L. Clark

sam rumaizan wrote:

   I'm just a php beginner.
  So please be patient with my stupid questions.
   
  What am I missing in this code that is causing the function to fail to update the textarea?
   
  The Update button erases the old information from the MySQL database and replaces it with nothing. Basically it can't read what is inside the textarea box. Why??
   
  Read the highlighted code.
   
  include ('./includes/header.html');

include( '../mysql_connect.php' );
  ?>



You asked this two days ago.
Have you tried any php mailing lists?

--
Gerald L. Clark
Supplier Systems Corporation

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql and Textarea

2007-05-22 Thread Baron Schwartz

Hi Sam,

sam rumaizan wrote:


   I'm just a php beginner.
  So please be patient with my stupid questions.


It's OK :-)


  What am I missing in this code that is causing the function to fail to update the 
textarea?
   
  The Update button erases the old information from the MySQL database and replaces it with nothing. Basically it can't read what is inside the textarea box. Why??


I think the problem is that you are asking in the wrong place.  You should probably ask 
on a PHP mailing list, IRC channel, or forum, as this isn't really a MySQL question.


Cheers,
Baron




Mysql and Textarea

2007-05-22 Thread sam rumaizan
   I'm just a php beginner.
  So please be patient with my stupid questions.
   
  What am I missing in this code that is causing the function to fail to update the 
textarea?
   
  The Update button erases the old information from the MySQL database and replaces it 
with nothing. Basically it can't read what is inside the textarea box. Why??
   
  Read the highlighted code.
   
  



  
View
Existing Data
  

  



  Choose a Category:
$value";
}
?>
  
  

  

  

   ID
  Reference No
  Job Descriptions
  Category
  Assign Engineer
  Date Received
  Date Required
  Date Assigned
  Projected Completion Date
  Date Completed
  Manhour Spent
  Status

  
";
echo "";
echo "{$row['ID']}";
echo "{$row['Ref_No']}";
echo ' '.$row['Job_Title'] .'';
echo " ";
echo 'Update data
';
echo "{$row['Category']}";
echo "{$row['Assign_Engineer']}";
echo "{$row['Date_Received']}";
echo "{$row['Date_Required']}";
echo "{$row['Date_Assigned']}";
echo "{$row['ProjectedCompletionDate']}";
echo "{$row['Date_Completed']}";
echo "{$row['ManhourSpent']}";
echo "{$row['Status']}";
echo "";
echo"";
}
  ?>




   

Stored function problem

2007-05-22 Thread Olaf Stein
Hi All,

I have a problem that I do not quite understand.

I have a table with individuals:

CREATE TABLE `individual` (
  `ident` mediumint(8) unsigned NOT NULL auto_increment,
  `fid` mediumint(8) unsigned NOT NULL,
  `iid` mediumint(8) unsigned NOT NULL,
  PRIMARY KEY  (`ident`),
  KEY `fidiid` (`fid`,`iid`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 |

The column ident is used in other tables to reference the entries in
individual.

Then I have a function:

CREATE FUNCTION get_ident(afid INT,aiid INT)
RETURNS INT

DETERMINISTIC

BEGIN
DECLARE ret INT;
SELECT ident INTO ret FROM individual WHERE fid=afid AND iid=aiid;

RETURN(ret);
END//


When calling this function directly with
select get_ident(1001,1)
it works fine.

When using this function in a query the system either runs out of memory or
the client loses the connection to the server (randomly with either of the 2
versions below):

select * from TABLE where ident=get_ident(1001,1);
select * from TABLE where ident=(select get_ident(1001,1));

If I use a subselect, it's all fine:

select * from TABLE where ident=(select ident from individual where fid=1001
and iid=1)

The table individual used to be InnoDB; I changed it to MyISAM because I
thought that might be the problem (the error log indicated this), but the
behaviour is the same.


Am I missing something or is this a bug or ...?

Thanks in advance
Olaf











Re: Problem with complex query

2007-05-22 Thread Juan Eduardo Moreno

Hi Patricio,

One option to prevent programmers/users from making "bad" queries against
the database is SQL_BIG_SELECTS = 0 | 1.

The documentation says:
"If set to 0, MySQL will abort if a SELECT is attempted that probably will
take a very long time. This is useful when an inadvisable WHERE statement
has been issued. A big query is defined as a SELECT that probably will have
to examine more than max_join_size rows. The default value for a new
connection is 1 (which will allow all SELECT statements)."

For testing try this :

1)

SET SESSION SQL_BIG_SELECTS=1;
SET SESSION SQL_MAX_JOIN_SIZE=DEFAULT;
Run the query;

2) Send your results and explain of query ( explain select )

Regards,
Juan


On 5/22/07, Patricio A. Bruna <[EMAIL PROTECTED]> wrote:


Friends,
I'm having trouble with the following query:

---
INSERT drp_volumen_venta_diaria_deposito (drp_id_sku, drp_id_deposito,
drp_volumen, drp_es_cct, drp_dia, drp_mes, drp_anno, drp_fecha_dia_cargado )
SELECT DISTINCT s.drp_id_sku, d.drp_id_deposito, vv.drp_volumen, 1,
vv.drp_dia, vv.drp_mes, vv.drp_anno,vv.drp_fecha_dia_cargado
FROM drp_volumen_venta_diaria_tmp vv, drp_sku s, drp_deposito d,
drp_mapeo_sku ms, drp_mapeo_deposito md
WHERE vv.drp_codigo_sku = ms.drp_cod_sku_odyssey AND ms.drp_cod_sku_sap =
s.drp_codigo_sku
AND REPLACE(UCASE(TRIM(vv.drp_codigo_deposito)),' ','')=
REPLACE(UCASE(TRIM(md.drp_alias_deposito_odyssey)),' ','') AND
REPLACE(UCASE(TRIM(md.drp_alias_deposito_sap)),' ','') =
REPLACE(UCASE(TRIM(d.drp_alias_deposito)),' ','')
AND
CONCAT(vv.drp_anno,if(LENGTH(vv.drp_mes)=1,CONCAT('0',vv.drp_mes),
vv.drp_mes)
, if(LENGTH(vv.drp_dia)=1,CONCAT('0',vv.drp_dia),vv.drp_dia) ) >= '
20070501 '
AND
CONCAT(vv.drp_anno,if(LENGTH(vv.drp_mes)=1,CONCAT('0',vv.drp_mes),
vv.drp_mes)
, if(LENGTH(vv.drp_dia)=1,CONCAT('0',vv.drp_dia),vv.drp_dia) ) <= '
20070515 ';

-

I run this query on 2 servers, devel and production, which have the same
data.
The query runs in devel without problems, but in production it does not
work and gives me this error:

" The SELECT would examine more rows than MAX_JOIN_SIZE. Check your WHERE
and use SET SQL_BIG_SELECTS=1 or SET SQL_MAX_JOIN_SIZE=# if the SELECT is ok
"

The value of MAX_JOIN_SIZE is set to 4294967295 and SQL_BIG_SELECTS
is 1.
MySQL version is 4.0.18 over Red Hat 3.


Any idea why this isn't working?

thanks


RE: a function to convert a uk date to and from mysql date

2007-05-22 Thread Edward Kay


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: 22 May 2007 15:47
> To: mysql@lists.mysql.com
> Subject: a function to convert a uk date to and from mysql date
>
>
> Hi,
>
> My UK dates are in this format: DD/MM/YYYY. I want it reversed and
> then the separator changed so it becomes YYYY-MM-DD.
>
> I use this PHP at the moment
>
> $available_from = implode('/', array_reverse(explode('-',
> $available_from)));
>
> Ta,
>
> R.
>

First question: if the PHP works, why are you asking here for an answer?

Assuming you now want to do this transformation at the database layer,
select the three date components out separately with the string functions
and then use DATE_FORMAT() to print them how you want.

http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function
_date-format

Edward
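Spelled out as a sketch, the round trip Edward describes can be done entirely in SQL with STR_TO_DATE() and DATE_FORMAT() (STR_TO_DATE requires MySQL 4.1.1 or later, so this assumes a reasonably recent server):

```sql
-- UK (DD/MM/YYYY) string to a MySQL DATE:
SELECT STR_TO_DATE('22/05/2007', '%d/%m/%Y');
-- -> 2007-05-22

-- MySQL DATE back to UK display format:
SELECT DATE_FORMAT('2007-05-22', '%d/%m/%Y');
-- -> 22/05/2007
```

Doing the conversion in SQL also rejects impossible dates, which blind string reversal in PHP will not.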





Re: a function to convert a uk date to and from mysql date

2007-05-22 Thread Chris Boget

My UK dates are in this format: DD/MM/YYYY. I want
it reversed and then the separator changed so it becomes
YYYY-MM-DD.
I use this PHP at the moment
$available_from = implode('/', array_reverse(explode('-', 
$available_from)));


An even better solution would be:

$UKDate = '22/05/2007';
// Note: strtotime() parses '/'-separated dates as US m/d/Y, which fails
// for UK dates; swap the separators to '-' first so it is read as d-m-Y.
$USDate = date( 'Y-m-d', strtotime( str_replace( '/', '-', $UKDate ) ));

echo 'Before: ' . $UKDate . "\n";
echo 'After: ' . $USDate . "\n";

thnx,
Chris 






a function to convert a uk date to and from mysql date

2007-05-22 Thread ross
Hi,

My UK dates are in this format: DD/MM/YYYY. I want it reversed and then the
separator changed so it becomes YYYY-MM-DD.

I use this PHP at the moment

$available_from = implode('/', array_reverse(explode('-', $available_from)));

Ta,

R.

Re: Problem with GRANT ... 'user'@'%'

2007-05-22 Thread Mogens Melander
On my servers I'm using 'user'@'localhost' for PHP apps running on
the local web server. Users allocated for web apps can only connect
to their specific DB from localhost.

On Tue, May 22, 2007 03:19, Miguel Cardenas wrote:
>> Localhost is indeed a special value that isn't include in '%'. It's a
>> feature not a bug ;)
>>
>> Regards,
>
> Bingo! That was the point! If i connect to the server ip or server name it
> works perfectly, but if I try to connect to localhost it fails unless I
> add a
> new user specific to localhost :D
>
> Thanks for your comments
>
>
>
> --
> This message has been scanned for viruses and
> dangerous content by OpenProtect(http://www.openprotect.com), and is
> believed to be clean.
>


-- 
Later

Mogens Melander
+45 40 85 71 38
+66 870 133 224
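A sketch of the pattern described above (database, user, and password are hypothetical): because 'localhost' is never matched by the '%' wildcard, two separate account entries are needed if the application should be able to connect both locally and remotely.

```sql
-- Account used by the PHP app connecting via the local socket:
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'webapp'@'localhost'
    IDENTIFIED BY 'secret';

-- Separate entry for remote connections; '%' does NOT cover localhost:
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'webapp'@'%'
    IDENTIFIED BY 'secret';
```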








Re: design choice - quite many tables

2007-05-22 Thread Brent Baisley
It seems to me that you are asking about MERGE tables. A merge table allows you to "combine" one or more identical MyISAM tables so that they appear as a single 
"virtual" table. The set of tables that makes up the merge table can be modified quickly and easily, regardless of size. Your code then only needs 
to reference one table name.


There are limitations to merge tables (e.g. the underlying tables can't be InnoDB), so you need to 
read up on them to see if they will work for you.
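A minimal sketch of the idea (table and column names are hypothetical; the MERGE syntax is as documented in the 4.x/5.0 manuals):

```sql
CREATE TABLE sales_2006 (id INT NOT NULL, amount DECIMAL(10,2)) ENGINE=MyISAM;
CREATE TABLE sales_2007 (id INT NOT NULL, amount DECIMAL(10,2)) ENGINE=MyISAM;

-- The MERGE table presents both annual tables as one; a new year is added
-- by redefining the UNION list.
CREATE TABLE sales_all (id INT NOT NULL, amount DECIMAL(10,2))
    ENGINE=MERGE UNION=(sales_2006, sales_2007) INSERT_METHOD=LAST;

-- Queries, including cross-year ones, now reference a single name:
SELECT SUM(amount) FROM sales_all;
```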

- Original Message - 
From: "Przemysław Klein" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, May 22, 2007 3:57 AM
Subject: design choice - quite many tables



Hi All.

I'm working on quite a big database. It consists of about 200 tables. Additionally, about 50 tables are per-year (because of annual 
data). This means that every year 50 new tables will have to appear in the application. Now I have a question: should I use separate 
databases for "annual" data (i.e. db2006, db2007, etc.) (I don't need constraints on those (annual) tables) or put all the tables 
in one database? Is there any way to 'catalogue'/organize tables within one database (namespace/schema)?

Any thoughts?
Thanks in advance...

--
_/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/
Przemek Klein ([EMAIL PROTECTED])









Re: Upgraded to 5.0.x from 4.1.x and ORDER BY sucks!

2007-05-22 Thread J.R. Bullington
Here's a question that begs to be asked --

When you upgraded from 4.1 to 5.0, did you do an in-place upgrade, or mysqldump 
your data and then re-import?

MySQL 5.x's query cache and table optimizers work very differently than in 4.1, 
so the results you are getting probably stem from two issues:

1) You didn't dump your data first and only did an in-place upgrade, and
2) your system isn't properly optimized for using the query cache.

Try using either mysqldump or mysql-administrator to dump out your data to an 
.sql file. Then re-import all of your data into 5.x. You will see a significant 
difference in your query speeds once you do this.

As to your query cache, make sure that it's on (it is on by default) and, based on 
your tables, that either your MyISAM key_buffer_size or your InnoDB 
innodb_buffer_pool_size is correct for your server's amount of RAM.

HTH!

J.R.


From: Gmail User <[EMAIL PROTECTED]>
Sent: Tuesday, May 22, 2007 2:30 AM
To: mysql@lists.mysql.com
Subject: Upgraded to 5.0.x from 4.1.x and ORDER BY sucks! 

I had perfectly working complex queries, both with LEFT JOIN and without,
and they were returning results in under a second. After the upgrade to
5.0.x, the same queries return results in the 20-30 second range.

Through trial and error, I discovered that SELECT ... FROM
table1, table2 ... ORDER BY table2.column is very slow (tens of
seconds), while the same query ordered by table1.column runs in under
1 second. If I switch the order of tables in FROM, the same holds
true in reverse. Is this a bug, or did I miss something in my
googling? More importantly, what can I do to change this--I need those
sorts! :-(

I got same results with 5.0.22, 5.0.27 (Linux).

TIA,

Ed





Problem with complex query

2007-05-22 Thread Patricio A. Bruna
Friends, 
I'm having trouble with the following query: 

--- 
INSERT drp_volumen_venta_diaria_deposito (drp_id_sku, drp_id_deposito, 
drp_volumen, drp_es_cct, drp_dia, drp_mes, drp_anno, drp_fecha_dia_cargado ) 
SELECT DISTINCT s.drp_id_sku, d.drp_id_deposito, vv.drp_volumen, 1, 
vv.drp_dia, vv.drp_mes, vv.drp_anno,vv.drp_fecha_dia_cargado 
FROM drp_volumen_venta_diaria_tmp vv, drp_sku s, drp_deposito d, 
drp_mapeo_sku ms, drp_mapeo_deposito md 
WHERE vv.drp_codigo_sku = ms.drp_cod_sku_odyssey AND ms.drp_cod_sku_sap = 
s.drp_codigo_sku 
AND REPLACE(UCASE(TRIM(vv.drp_codigo_deposito)),' ','')= 
REPLACE(UCASE(TRIM(md.drp_alias_deposito_odyssey)),' ','') AND 
REPLACE(UCASE(TRIM(md.drp_alias_deposito_sap)),' ','') = 
REPLACE(UCASE(TRIM(d.drp_alias_deposito)),' ','') 
AND 
CONCAT(vv.drp_anno,if(LENGTH(vv.drp_mes)=1,CONCAT('0',vv.drp_mes),vv.drp_mes) 
, if(LENGTH(vv.drp_dia)=1,CONCAT('0',vv.drp_dia),vv.drp_dia) ) >= ' 20070501 ' 
AND 
CONCAT(vv.drp_anno,if(LENGTH(vv.drp_mes)=1,CONCAT('0',vv.drp_mes),vv.drp_mes) 
, if(LENGTH(vv.drp_dia)=1,CONCAT('0',vv.drp_dia),vv.drp_dia) ) <= ' 20070515 '; 

- 

I run this query on 2 servers, devel and production, which have the same data. 
The query runs in devel without problems, but in production it does not work and 
gives me this error: 

" The SELECT would examine more rows than MAX_JOIN_SIZE. Check your WHERE and 
use SET SQL_BIG_SELECTS=1 or SET SQL_MAX_JOIN_SIZE=# if the SELECT is ok " 

The value of MAX_JOIN_SIZE is set to 4294967295 and SQL_BIG_SELECTS is 1. 
MySQL version is 4.0.18 over Red Hat 3. 


Any idea why this isn't working? 

thanks 
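As an aside, the zero-padding IF/CONCAT expressions in the WHERE clause above can be written more compactly with LPAD(), and since drp_fecha_dia_cargado already carries the full date, comparing that column directly would avoid the string building altogether and could let an index be used. A sketch, assuming drp_fecha_dia_cargado is a DATETIME:

```sql
-- Equivalent zero-padding with LPAD():
-- CONCAT(vv.drp_anno, LPAD(vv.drp_mes, 2, '0'), LPAD(vv.drp_dia, 2, '0'))

-- Simpler and potentially index-friendly, using the datetime column itself:
SELECT vv.drp_volumen
FROM drp_volumen_venta_diaria_tmp vv
WHERE vv.drp_fecha_dia_cargado >= '2007-05-01'
  AND vv.drp_fecha_dia_cargado <= '2007-05-15 23:59:59';
```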

5.1 release date

2007-05-22 Thread Olaf Stein
Hi all,

Are there any projections as to when mysql 5.1 will be released?

Thanks
Olaf





design choice - quite many tables

2007-05-22 Thread Przemysław Klein

Hi All.

I'm working on quite a big database. It consists of about 200 tables. 
Additionally, about 50 tables are per-year (because of annual data). This 
means that every year 50 new tables will have to appear in the application. 
Now I have a question: should I use separate databases for "annual" data 
(i.e. db2006, db2007, etc.) (I don't need constraints on those (annual) 
tables) or put all the tables in one database? Is there any way to 
'catalogue'/organize tables within one database (namespace/schema)? 


Any thoughts?
Thanks in advance...
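On the namespace question: in MySQL a database *is* the schema, so "annual" databases can still be queried together by qualifying table names with the database, from a single connection. A sketch (table names are hypothetical):

```sql
-- Tables in other databases are reachable with db.table qualification,
-- so per-year databases can still be combined in one query:
SELECT * FROM db2006.documents
UNION ALL
SELECT * FROM db2007.documents;

-- The default database can also be switched at any time:
USE db2007;
```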

--
_/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/ _/
Przemek Klein ([EMAIL PROTECTED])





Re: string to timestamp conversion

2007-05-22 Thread Mogens Melander
Like:

  str_to_date('Thu May 17 09:15:47 2007','%a %b %e %T %Y')
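Spelled out in context (table and column names are hypothetical), the conversion can happen inline while copying the varchar value into a real TIMESTAMP column:

```sql
-- Parse the varchar timestamp as it is inserted into the target table:
INSERT INTO events (created_at)
SELECT STR_TO_DATE(raw_ts, '%a %b %e %T %Y')
FROM staging;
```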

On Mon, May 21, 2007 21:10, [EMAIL PROTECTED] wrote:
> Have you considered using the string to time function?
>
>
> Sent via BlackBerry from T-Mobile
>
> -Original Message-
> From: "Bryan Cantwell" <[EMAIL PROTECTED]>
> Date: Mon, 21 May 2007 12:08:11
> To:"MySQL General" 
> Subject: string to timestamp conversion
>
> I have a table with a varchar column that contains a timestamp like
> this: 'Thu May 17 09:15:47 2007'
> I need to grab this and include it in an insert sql that puts that value
> in a table as a timestamp...
>
>
>
>
>
>
>
>
>
>
>


-- 
Later

Mogens Melander
+45 40 85 71 38
+66 870 133 224




