Re: maximum number of records in a table

2007-06-12 Thread kalin mintchev

hi david..  thanks...

i've done this many times, yes, either through php, perl, python or on
the mysql cl client. but my question here is not about doing it and insert
times, it's more about hosting it and query times. i currently have a
working table for the same purpose with about 1.5 million records in it,
and the thing runs smooth on a machine that is 4 years old with 1 gig of
ram and a 2.8 ghz processor. the thing is that now i'm talking about
this x 100 times, more or less. i'm not worried about the insert times -
this happens only once, and for a million entries, depending on what
technique is used, it takes no longer than a few minutes.
what i was asking basically was for somebody to share experience with running
a server with that amount of records in one table.

currently the table i have has a size of 65 MB, which x 100 is about 6600
MB, or 6.6 gigs. which means that i'd have to have about 8 gigs of ram to
successfully use a table like that. either that or cluster 2 machines with
4 gigs each and split the table. does this sound reasonable? is my logic
flawed somehow?

i'll appreciate any comments on this subject    thanks...



 On 6/11/07, kalin mintchev [EMAIL PROTECTED] wrote:

 hi all...

 from http://dev.mysql.com/doc/refman/5.0/en/features.html:

 Handles large databases. We use MySQL Server with databases that contain
 50 million records. We also know of users who use MySQL Server with 60,000
 tables and about 5,000,000,000 rows.

 that's cool but i assume this is distributed over a few machines...

 we have a new client that needs a table with 99 000 000 rows, 2-3 columns.
 i was just wondering, if i have two dual-core processors in a machine
 with 4 gigs of ram - is that enough to host and serve queries from a table
 of this size?
 a few tables on the same machine?
 more than one machine?
 what are the query times like?

 can somebody please share some/any experience s/he has/had with managing
 databases/tables with that amount of records. i'd really appreciate it...


 99 million isn't that large of a number.

 If you key the database properly, search times should be very modest.  I
 can't speak for insert times, though, especially when keys are involved.

 This kind of thing is easy enough to do in your favorite scripting
 language.  I would just create a table with a few keys and just
 for ($i = 0; $i < 9900; $i++) it with random numbers.

 If you have PHP on your system, here is some PHP code (runnable from the
 command line) that you should be able to hack down.  It should answer your
 immediate questions about which PHP statements to use (if you've never done
 this from PHP before):

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup

 http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup

 Near the end of it, especially if the software writes output, you should get
 an intuitive feel for how long each INSERT is taking.

 You can even do test queries using the barebones MySQL client ... you should
 see interactively how long a query takes.

 I would ALMOST do this for you, but it is just beyond the threshold of what
 I'd do because I'm bored and watching TV.  I'm just a little curious
 myself.  I've never messed with a table above 10,000 rows or so.

 Dave




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: maximum number of records in a table

2007-06-12 Thread kalin mintchev
 I guess a lot of that depends what an acceptable query execution time for
 you is.

well...  i don't really know. 30 secs maximum?! i've never worked with
such huge tables. 3 - 5 million records is fine, but i've never worked on a
db with a table with 100 000 000 records.


 Also, what else does the machine do, are there other databases or tables
 that are queried at the same time, do you have to join other tables in for
 your queries, etc?

that would be installed on a separate machine that might run only that
project. so yeah, there will be queries to other tables, but only after the
result from the 99 million table returns.
there are no join requests with the 99 m table.

my calculation was mostly based on resources - like ram. like i mentioned
earlier, the .MYD and .MYI files together on the current one that i have -
which has about 1.2 million records - are 90 MB.
are the .MYI files kept in ram, or both .MYD and .MYI?

multiplying 90x100 is what the size of the MYI + MYD will be, right?
is that all living in ram?
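(For MyISAM, only the index blocks from the .MYI files are cached by MySQL itself, in the key buffer; the .MYD data pages are left to the operating system's file cache. A hedged sketch of the relevant knobs - the size shown is illustrative, not a recommendation:)

```sql
-- Size the MyISAM key cache to hold the hot part of the .MYI files
-- (illustrative value; tune to your RAM and total index size)
SET GLOBAL key_buffer_size = 1024 * 1024 * 1024;  -- 1 GB

-- Check how much of the key cache is actually in use
SHOW STATUS LIKE 'Key_blocks_used';
```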

thanks







Re: maximum number of records in a table

2007-06-12 Thread kalin mintchev

thanks j.r...


   Olaf is right.

 It is really more about query execution time, and more importantly, QUERY
 OPTIMIZATION.

 Depending on how you setup your keys, your table type, and what else your
 server does, you should be able to run multiple queries on this table
 without too much of an issue.

are you suggesting that ONE machine can handle that load without problems?



 2 BIG suggestions --

 1) Whatever query you want to run on this table, run EXPLAIN. Then study
 the results and do your optimization and key creation.

ok...
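(the EXPLAIN advice above can be sketched like this - the table and column names are hypothetical, not from the thread:)

```sql
-- Hypothetical 99-million-row lookup table with an index on the looked-up value
CREATE TABLE codes (
  id  INT UNSIGNED NOT NULL,
  val CHAR(12)     NOT NULL,
  PRIMARY KEY (id),
  KEY idx_val (val)
) ENGINE=MyISAM;

-- EXPLAIN shows whether the query can use idx_val;
-- look for a small "rows" estimate rather than a full table scan
EXPLAIN SELECT id FROM codes WHERE val = 'abc123';
```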


 2) QUERY_CACHE. This is where you are going to live or die. Since you said
 you will be doing a lot of SELECTs and not-so-many INSERTs or UPDATEs, the
 QUERY_CACHE is going to help out a lot here.

well, not sure about this, since basically the queries will be different
every time - i mean the query itself is the same, but the looked-up values
are 99 million different ones.

i'll be running some tests and will report...  might be helpful to
somebody else...
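(for completeness, the query cache the suggestion refers to is configured roughly like this on MySQL 5.x - the size is illustrative; note the cache only helps when the exact same query text repeats:)

```sql
-- Enable and size the query cache (MySQL 5.x; the feature was removed in 8.0)
SET GLOBAL query_cache_type = 1;
SET GLOBAL query_cache_size = 64 * 1024 * 1024;  -- 64 MB

-- Hit/insert counters show whether the cache is earning its keep
SHOW STATUS LIKE 'Qcache%';
```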


 HTH!

me too..  thanks.


 J.R.

 


Re: maximum number of records in a table

2007-06-12 Thread kalin mintchev
 You should be fine. 100 000 000 is not that much.
 Just make sure you set the right keys (and then query by them) on the table
 and, even more importantly, set caches and buffers to utilize your RAM
 appropriately.

thanks.



 Olaf



maximum number of records in a table

2007-06-11 Thread kalin mintchev
hi all...

from http://dev.mysql.com/doc/refman/5.0/en/features.html:

Handles large databases. We use MySQL Server with databases that contain
50 million records. We also know of users who use MySQL Server with 60,000
tables and about 5,000,000,000 rows.

that's cool but i assume this is distributed over a few machines...

we have a new client that needs a table with 99 000 000 rows, 2-3 columns.
i was just wondering, if i have two dual-core processors in a machine
with 4 gigs of ram - is that enough to host and serve queries from a table
of this size?
a few tables on the same machine?
more than one machine?
what are the query times like?

can somebody please share some/any experience s/he has/had with managing
databases/tables with that amount of records. i'd really appreciate it...


thanks a lot





Re: maximum number of records in a table

2007-06-11 Thread kalin mintchev
 The answer depends upon the actual queries and/or how much data is being
 returned.

there is ALWAYS only one record found/returned per query...  it's like
looking up one unique id - like checking if number 5893786 is there in
the table...  a membership test on a unique id...
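(for a pure existence check on a unique id like this, a primary-key point lookup is about as cheap as a query gets - a sketch with a hypothetical table name:)

```sql
-- Hypothetical schema for the one-row-per-lookup use case
CREATE TABLE ids (
  id INT UNSIGNED NOT NULL,
  PRIMARY KEY (id)
) ENGINE=MyISAM;

-- Returns one row if the id is present, zero rows if not;
-- resolved entirely through the primary-key index
SELECT 1 FROM ids WHERE id = 5893786;
```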



 trying to answer only based upon row count is NONSENSICAL, IMO.





Re: mysql limits

2007-02-07 Thread kalin mintchev


 Search speeds and CPU with MyISAM are quite good. I tried InnoDB and insert
 speeds were far too slow because of its row locking versus MyISAM's table
 locking. Some people have been able to fine-tune InnoDB but it requires
 even more RAM, because InnoDB works best when the entire table fits into
 memory.

thanks...



 Mike








Re: mysql limits

2007-02-05 Thread kalin mintchev

thanks...  my question was more like IF mysql can handle that amount of
records - about 100 million - and if it's just a question of cpu power
and memory?


 Hi,

 The limit for the table can be set when you create the table itself:
 the MAX_ROWS and AVG_ROW_LENGTH variables together (an m x n calculation)
 decide the table size.

 MAX_ROWS limits the maximum number of rows in that table.  The
 AVG_ROW_LENGTH variable declares the expected length of a row, which
 depends on the size of the columns.
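(a minimal sketch of those two table options - the table name, columns, and values are illustrative:)

```sql
-- MAX_ROWS and AVG_ROW_LENGTH hint at the expected table size so MyISAM
-- can allocate large enough internal row pointers up front
CREATE TABLE big_table (
  id  INT UNSIGNED NOT NULL,
  val CHAR(12)     NOT NULL,
  PRIMARY KEY (id)
) ENGINE=MyISAM
  MAX_ROWS = 100000000
  AVG_ROW_LENGTH = 20;
```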

 Thanks
 ViSolve DB Team.



Re: mysql limits

2007-02-05 Thread kalin mintchev

 Put as much memory in the machine as possible. Building indexes for a table
 of that size will consume a lot of memory, and if you don't have enough
 memory, building the index will be done on the hard disk where it is 100x
 slower. I've had 100M row tables without too much problem. However when I
 tried 500M rows the indexes could not be built (took days) because I had
 too little RAM.

thanks  would you please be more specific about too little RAM? what
amount of memory is enough for the 500M? what about search speeds? cpu?
also what kind of tables did you use?

thanks







 Mike

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








mysql limits

2007-02-04 Thread kalin mintchev
hi all...

i just wanted to ask here if somebody has experience in pushing the mysql
limits...  i might have a job that needs to have a table (or a few tables)
holding about a 100 million records. that's a lot of records...  is there
any limitation of some kind that wouldn't allow mysql to handle that kind
of amounts, or does it all depend on memory and cpu... and how are the
searches - speed and otherwise - affected by such numbers?

thanks





group by problem

2006-10-16 Thread kalin mintchev

 hi all...

 i have an issue with group by and ordering. apparently group by ignores
'order by id DESC'?!

 an example is a table that has an id and a category field. there are a
few categories under which records can be filed. so what i want is the
latest record from each category, by doing something like:

select id, name from table group by category order by id DESC;

this doesn't work - it shows me the first record under each category - not
the latest as specified by DESC?! something is wrong.

i tried 'distinct' but that 'distincts' on all fields in the query?!?!
what's the point of distinct if it can not distinguish between fields?! in:

select distinct category, id, name from table order by id DESC;

this query distincts on all of category, id and name when it should distinct
only on category...

how do i do that without temporary tables?

thanks...






Re: group by problem

2006-10-16 Thread kalin mintchev


 Basically you can't do what you want either without temporary tables or
 using a subselect. Subselects are only available in mysql 4.1+ (I think
 - check the docs) so that may or may not be an option.

thanks... pardon my ignorance - how would i do that using subselects?
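(one common shape for that subselect, assuming MySQL 4.1+ - the table name "articles" is hypothetical, with the id/name/category columns from the earlier message:)

```sql
-- For each category, keep the row with the largest (latest) id:
-- the correlated subquery finds that category's maximum id
SELECT a.id, a.name, a.category
FROM articles a
WHERE a.id = (
  SELECT MAX(id)
  FROM articles
  WHERE category = a.category
);
```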









Re: group by problem

2006-10-16 Thread kalin mintchev


 but that will get you all records for that category not just the most
 recently updated.


that's the main problem, isn't it?

what i'm looking for is the last record for EACH of the categories in the
table. i'm aware of the aformentioned options.

my problem with group by is that ignores the DESC in order by. if i do:

select id from table group by category;

it does the same as if i do:

select id from table group by category order by id desc;

both queries return the FIRST found record for EACH category in ascending
order - the order they were indexed in. i guess i still don't get why if i
request 'order by id desc' the 'group by' doesn't look 'backwards'.
obviously the 'grouping' comes before the 'ordering' in the query - it
probably is executed that way too. why can't it be the other way around,
taking into consideration the request of ordering the results in descending
order and then grouping them by category?

i guess the answer is - just because...
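If the server predates 4.1 (so no subqueries), the manual's MAX-CONCAT trick gets the same per-category-latest answer without temporary tables. Table and column names here are assumed for illustration:

```sql
-- Pad id to a fixed width so the string MAX() sorts numerically,
-- then split the padded id and the name back apart.
SELECT category,
       0 + LEFT(MAX(CONCAT(LPAD(id, 10, '0'), name)), 10) AS id,
       SUBSTRING(MAX(CONCAT(LPAD(id, 10, '0'), name)), 11) AS name
FROM records
GROUP BY category;
```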







select random ids from list

2006-07-31 Thread kalin mintchev

 hi all...

 how can i do a query on a list of ids without doing individual queries
for each one of them?

 something like:
 select id,title from content where id = 100,106,109;

 of course this doesn't work...  is it possible somehow?


 thanks







Re: select random ids from list

2006-07-31 Thread kalin mintchev

 select id,title from content where id IN(10,20,30);

cool thanks..  what if i want a custom ordering, like:

select id,title from content where id IN(20,10,30);

and i would like the order of the ids in the in() list to be preserved in
the result set. what 'order by' would i use?
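FIELD() returns the 1-based position of its first argument within the rest of the list, so it can pin the result order to the order of the IN() list:

```sql
-- Rows come back in the order the ids appear in the list: 20, 10, 30.
SELECT id, title
FROM content
WHERE id IN (20, 10, 30)
ORDER BY FIELD(id, 20, 10, 30);
```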









binary select - case sensitive

2006-07-05 Thread kalin mintchev
hi all...

i found this on the mysql dev manual site:
In MySQL 4 and later, string comparisons, including DISTINCT, aren't case
sensitive unless the field is declared as BINARY or you use BINARY in your
comparison.

so here i tried it but no good. any ideas?! the field is not declared
binary but i do have 'binary' in my select. is it in the wrong place?!

thanks

mysql> select binary code from promo_codes where code = 'USAW2umHv7De';
+--+
| binary code  |
+--+
| USAW2umHv7De |
+--+
1 row in set (0.02 sec)

mysql> select binary code from promo_codes where code = 'USAW2umHv7DE';
+--+
| binary code  |
+--+
| USAW2umHv7De |
+--+
1 row in set (0.00 sec)
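BINARY in the select list only casts the output column; to make the comparison itself case sensitive, the cast has to sit in the WHERE clause. A sketch:

```sql
-- Case-sensitive match: cast the column (or the literal) inside
-- the comparison, not in the select list.
SELECT code FROM promo_codes WHERE BINARY code = 'USAW2umHv7DE';
-- With only the row 'USAW2umHv7De' in the table, this returns no rows,
-- whereas the un-cast comparison above matched it.
```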






Re: binary select - case sensitive

2006-07-05 Thread kalin mintchev

 http://dev.mysql.com/doc/refman/5.1/en/charset-binary-op.html

 It has some good examples.

got it thanks...









mysql for freebsd 6.0

2006-03-22 Thread kalin mintchev

  hi all...

  i can't see the mysql 5 version for freebsd 6.0 on the mysql developer
site?
  am i blind or is it missing on purpose?!?!

  curious...  and actually need it...

  thanks...





data backup

2006-03-05 Thread kalin mintchev

 hi all...

what's the best way to periodically back up mysql data, so that databases
and tables are still usable even after a mysql upgrade?

thanks...
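One common approach (not the only one) is a logical dump with mysqldump: the plain SQL text it produces can be reloaded after an upgrade. Paths and credentials below are placeholders:

```shell
# Dump everything to plain SQL (schema + data); portable across upgrades.
mysqldump -u root -p --all-databases --opt > /backup/all-databases.sql

# Restore later (or on a newer server) with:
# mysql -u root -p < /backup/all-databases.sql
```

Run from cron for periodic backups; for large databases consider dumping per-database files instead.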





simple load query

2006-01-17 Thread kalin mintchev


  hi all...

  what's the problem with this:
load data infile 'stores.txt' into table useyourcash_sports_us (chain,
store, address1, address2, city, state, zip,web) FIELDS TERMINATED BY '\t'

i get:
 You have an error in your SQL syntax; check the manual that corresponds
to your MySQL server version for the right syntax to use near 'FIELDS
TERMINATED BY '\t'' at line 1

why is 'fields term...' wrong there?

thanks a lot...
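The FIELDS/LINES clauses have to come directly after the table name, before the column list. A corrected sketch of the statement above:

```sql
-- FIELDS TERMINATED BY must precede the (column, ...) list.
LOAD DATA INFILE 'stores.txt' INTO TABLE useyourcash_sports_us
FIELDS TERMINATED BY '\t'
(chain, store, address1, address2, city, state, zip, web);
```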





Re: is this normal?

2005-12-07 Thread kalin mintchev
 Sorry I first misread your question (I hope I cancelled prev. post in
time). I thought you had problems connecting to mysql, while the actual
problem is: Why is it so easy for everyone (even from outside) to
connect?
 Yes this is normal: the default user is root *without password*
 if you want to secure your installation... well just read the manual :-)

 ok.. but i have a root password set up. that works fine. if i'm root and
start the client it complains if i don't indicate that i'll be using a
password. but if i use _anything-else_ as the user (i.e. -u stuff) it just
lets me right through. the users are not real local system users, just
invented stuff. once i'm in though i can not (apparently) do anything -
create or drop or anything like that - and don't see any of the databases.
but i still shouldn't even get to that point... by default...

my guess is that i have to set up a password for '' (empty user) @
localhost and '' (empty user) @ my.machine.name.xxx...  right?
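Roughly, yes - the open door is the anonymous ''@localhost / ''@hostname accounts the default install creates. An alternative to giving them passwords is to remove them outright (run as root):

```sql
-- Delete the anonymous accounts, then reload the grant tables.
DELETE FROM mysql.user WHERE User = '';
FLUSH PRIVILEGES;
```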


 http://dev.mysql.com/doc/refman/5.0/en/default-privileges.html
 also
 http://dev.mysql.com/doc/refman/5.0/en/security.html
 HTH


--








Re: is this normal?

2005-12-07 Thread kalin mintchev



 Yes you're right! you can safely delete those - but first make some
 changes... as it seems you've set up root password only for :
 [EMAIL PROTECTED] - right? that's why you can't login using
 localhost (mysql without -h host option assumes -h localhost and if you
 only changed password for [EMAIL PROTECTED] then password for
 [EMAIL PROTECTED] is probably still the same  default *no password* :)
   In that case, to login with the password you changed, try the following
 (I assume mysql runs without --skip-networking) instead:
 mysql -h my.machine.name.xxx -u root -p

thanks i already have that. for both my.machine.xxx and localhost..
logging in as root wasn't the issue. those were the first passwords i
did set up. i just freaked out for a minute there realizing that typing -u
some-stuff will get you in even if you can't do anything inside. i did try
select - not even that... and no, you can't see test - at least not on
this sql - 5.0.*

thanks again...

-- 






is this normal?

2005-12-06 Thread kalin mintchev


  system freebsd 4.10

  mysql is binary: mysql-standard-5.0.15-freebsd4.7-i386

  after mysql installation and root password set up i can type:

  /usr/local/mysql/bin/mysql -u no_matter_what_user_here

  and i get right in. even if i type -p i do get the password prompt, but it
doesn't matter what i type there - even nothing - i get straight in...

  is this normal??

  thanks...



-- 






data export

2005-04-11 Thread kalin mintchev


hi all...


is there a simple way to export data from mysql tables? like the opposite
of 'load data infile'? i was looking around in the manual but without
success...

thanks.
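The direct opposite of LOAD DATA INFILE is SELECT ... INTO OUTFILE; the file path and table name below are placeholders. Note the file is written on the server host and must not already exist:

```sql
-- Export a table as tab-delimited text, mirroring LOAD DATA's format.
SELECT * INTO OUTFILE '/tmp/mytable.txt'
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
FROM mytable;
```

For a portable SQL-text export, mysqldump is the other standard option.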

--






insert

2005-01-07 Thread kalin mintchev
hi all...

i need some help. can somebody explain why this insert/select is not working?

insert into the_db (place,address,number) values(stuff,stuff2,select
this from that where one=two);

i tried ()s around the 'select' too but it didn't work.

thanks.

-- 





Re: insert

2005-01-07 Thread kalin mintchev

 insert/select has a different syntax than what you are trying to use.

 try

 insert into table_name (place, address, number) select stuff,
 stuff2, this from that where one=two

aha...  this is it. thank you...

and you can select more than one thing from 'that':

insert into table_name (place, address, number,number_too) select
stuff,stuff2, this, this_too from that where one=two;

thanks...


 i may yet have the syntax wrong, but it's close.


 kalin mintchev wrote:
 hi all...

 i need some help. can somebody explain why is this insert/select not
 working?

 insert into the_db (place,address,number) values(stuff,stuff2,select
 this from that where one=two);

 i tried ()s around the 'select' too but it didn't work.

 thanks.



 --
 Jeff Mathis, Ph.D.                505-955-1434
 Prediction Company                [EMAIL PROTECTED]
 525 Camino de los Marquez, Ste 6  http://www.predict.com
 Santa Fe, NM 87505






-- 





Deleted: 0 Skipped: 0 Warnings: a lot

2004-12-09 Thread kalin mintchev
hi all...

where can i see what these warnings are. i get them when i do load data
infile.
i was looking for some kind of log under /var/logs/ or mysql/var/log but
there isn't a log file for mysql at any of those locations. and the
mysql/var/ doesn't exist at all.
i created it and redid the load but still no log file..

is there a way to read the binary log files...

thanks...



--





RE: Deleted: 0 Skipped: 0 Warnings: a lot

2004-12-09 Thread kalin mintchev
 Command is SHOW WARNINGS;

 You can run this from either the command line or from the MySQL CC SQL
viewer.

thanks..  but from the command line i get:
Check the manual that corresponds to your MySQL server version for the
right syntax to use near 'warnings' at line 1

is mysql-standard-4.0.13 too old for show warnings!?


(J.R., sorry about the double post...)




 J.R.

 -Original Message-
 From: kalin mintchev [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 09, 2004 6:53 PM
 To: [EMAIL PROTECTED]
 Subject: Deleted: 0 Skipped: 0 Warnings: a lot

 hi all...

 where can i see what these warnings are. i get them when i do load data
infile.
 i was looking for some kind of log under /var/logs/ or mysql/var/log but
there isn't a log file for
 mysql at any of those locations. and the mysql/var/ doesn't exist at
all.
 i created it and redid the load but still no log file..

 is there a way to read the binary log files...

 thanks...



 --









-- 







RE: Deleted: 0 Skipped: 0 Warnings: a lot

2004-12-09 Thread kalin mintchev

 I have never had a problem running that command, however you do have to
 run it right after you run
 your query (or LOAD DATA). That may be why you can't use it.

that's what i did. here, this is straight from the cl of the mysql client:

Query OK, 415 rows affected (0.01 sec)
Records: 415  Deleted: 0  Skipped: 0  Warnings: 125

mysql> show warnings;
ERROR 1064: You have an error in your SQL syntax.  Check the manual that
corresponds to your MySQL server version for the right syntax to use near
'warnings' at line 1



 Also, try looking under HOSTNAME.ERR instead of a log file. That might
 help out.

there is no such file on this machine...


thanks


 J.R.

 -Original Message-
 From: kalin mintchev [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 09, 2004 7:29 PM
 To: J.R. Bullington
 Subject: RE: Deleted: 0 Skipped: 0 Warnings: a lot


 Command is SHOW WARNINGS;

 You can run this from either the command line or from the MySQL CC SQL
 viewer.

 thanks..  but from the command line i get:
 Check the manual that corresponds to your MySQL server version for the
 right syntax to use near
 'warnings' at line 1

 is mysql-standard-4.0.13 too old for show warnings!?




 J.R.

 -Original Message-
 From: kalin mintchev [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 09, 2004 6:53 PM
 To: [EMAIL PROTECTED]
 Subject: Deleted: 0 Skipped: 0 Warnings: a lot

 hi all...

 where can i see what these warnings are. i get them when i do load
 data infile.
 i was looking for some kind of log under /var/logs/ or mysql/var/log
 but there isn't a log file for mysql at any of those locations. and
 the mysql/var/ doesn't exist at all.
 i created it and redid the load but still no log file..

 is there a way to read the binary log files...

 thanks...



 --









 --







--





RE: Deleted: 0 Skipped: 0 Warnings: a lot

2004-12-09 Thread kalin mintchev
ok...  i just saw this in the manual:

This command is implemented in MySQL 4.1.0

that's why it doesn't work... mine is 4.0.13.
so what do people who have versions earlier than 4.1 do to see the
warnings?!



 I have never had a problem running that command, however you do have to
 run it right after you run
 your query (or LOAD DATA). That may be why you can't use it.

 Also, try looking under HOSTNAME.ERR instead of a log file. That might
 help out.

 J.R.

 -Original Message-
 From: kalin mintchev [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 09, 2004 7:29 PM
 To: J.R. Bullington
 Subject: RE: Deleted: 0 Skipped: 0 Warnings: a lot


 Command is SHOW WARNINGS;

 You can run this from either the command line or from the MySQL CC SQL
 viewer.

 thanks..  but from the command line i get:
 Check the manual that corresponds to your MySQL server version for the
 right syntax to use near
 'warnings' at line 1

 is mysql-standard-4.0.13 too old for show warnings!?




 J.R.

 -Original Message-
 From: kalin mintchev [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 09, 2004 6:53 PM
 To: [EMAIL PROTECTED]
 Subject: Deleted: 0 Skipped: 0 Warnings: a lot

 hi all...

 where can i see what these warnings are. i get them when i do load
 data infile.
 i was looking for some kind of log under /var/logs/ or mysql/var/log
 but there isn't a log file for mysql at any of those locations. and
 the mysql/var/ doesn't exist at all.
 i created it and redid the load but still no log file..

 is there a way to read the binary log files...

 thanks...



 --









 --





-- 





Re: syntax question..

2004-11-15 Thread kalin mintchev


 - Original Message -
 From: kalin mintchev [EMAIL PROTECTED]
 To: Adam [EMAIL PROTECTED]
 Cc: MySQL General [EMAIL PROTECTED]
 Sent: Sunday, November 14, 2004 11:33 PM
 Subject: Re: syntax question..



  Kalin,
 
  Kalin thanks to all... yes, Rhino - i'm new. my first
  Kalin post. the version is the problem indeed. it's
  Kalin 4.0.13-standard. how would i achieve the same
  Kalin query in this version?...
 
  You'll need to provide the following:
 
  (1) What is the result you want to achieve?
 well:
 this is what i wrote:
 select t1.data from table1 as t1 where t1.zip=(select * from table2 as
 t2
 where t2.chain like %carmike%);

 i guess this will make more sense:
 basically i'm looking for the data in t1.data - which is varchar - that
 has a zip  value of the record(s)  that contain the name carmike in
 their chain field in table2...

 does this make sense?


 Not entirely. It sounds like you are saying that the zip value is in the
 column called 'data'. From your query, it appears that the zip code is in
 the column called 'zip'. I *think* you mean to say that for every row in
 table1 that has the same zip code as the zip code of
 '%carmike%', you want to see some other information in the same row, which
 you are calling 'data' in your example.

 There are probably a few different ways to do this query. I'm going to
 suggest one possible approach but it is not necessarily the best one;
 others
 may come along later to suggest a better approach.

 I think you need to do something like this:

 select t1.data
 from table1 t1 inner join table2 t2 on t1.zip = t2.zip
 where t2.chain like '%carmike%';

i tried this one...  it seems to work. i'll read a bit more about joins ..
 thank you... appreciate it...



 You haven't provided full descriptions of the tables and their columns and
 I
 have no idea what the data in the tables actually is so I can't try this
 for
 you to be sure it works. However, it should be pretty close to what you
 need. Give it a try and let us know what error messages you get. If you
 don't get error messages but the result is incorrect, describe the result
 that you got and how it differed from the result you expected. Then we can
 probably help you figure out what went wrong.

 Rhino




-- 





syntax question..

2004-11-14 Thread kalin mintchev
hi everybody..

can somebody please explain what is wrong with this command:
select t1.data from table1 as t1 where t1.zip=(select * from table2 as t2
where t2.chain like %carmike%);


thank you...


-- 





RE: syntax question..

2004-11-14 Thread kalin mintchev
thanks to all... yes, Rhino - i'm new. my first post.
the version is the problem indeed. it's 4.0.13-standard.
how would i achieve the same query in this version?...

thank you..


 Hi
 sub selects are only supported from MySql 4.1 onwards, so it may be invalid
 if you have an earlier version.
 you may also cause a conflict by using the database alias (t1) as the
name
 of the result
 Peter
 -Original Message-
 From: kalin mintchev [mailto:[EMAIL PROTECTED]
 Sent: 14 November 2004 23:06
 To: [EMAIL PROTECTED]
 Subject: syntax question..
 hi everybody..
 can somebody please explain what is wrong with this command:
 select t1.data from table1 as t1 where t1.zip=(select * from table2 as t2
 where t2.chain like %carmike%);
 thank you...


--







Re: syntax question..

2004-11-14 Thread kalin mintchev

 Kalin,

 Kalin thanks to all... yes, Rhino - i'm new. my first
 Kalin post. the version is the problem indeed. it's
 Kalin 4.0.13-standard. how would i achieve the same
 Kalin query in this version?...

 You'll need to provide the following:

 (1) What is the result you want to achieve?
well:
this is what i wrote:
select t1.data from table1 as t1 where t1.zip=(select * from table2 as t2
where t2.chain like %carmike%);

i guess this will make more sense:
basically i'm looking for the data in t1.data - which is varchar - that
has the zip value of the record(s) that contain the name carmike in
their chain field in table2...

does this make sense?



 (2) What are the details of the tables in the join
 (column names  data types)?

 Regards,
 Adam





-- 





terminated by '\r'....

2003-07-04 Thread Kalin Mintchev
hi all,

load data infile doesn't work with escaped by '\\'   line
terminated by '\r'.
the problem is the '\r'. it doesn't work with '\r\n' either.
if i replace the \r with \n in the text file itself then it works
fine. the files i'm trying to load are tab-delimited txt files generated
by excel...
they don't have '\r\n' at the end of the line - all they have is '\r'...
what should i specify as the end of the line to make it work as a mysql
command without using sed?

thanks..
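One thing worth checking: the keyword is LINES (plural), and the FIELDS clause must come before it; '\r' alone is then a legal terminator for classic-Mac/Excel files. The table name below is a placeholder:

```sql
-- Tab-delimited Excel export with bare '\r' line endings.
LOAD DATA INFILE 'stores.txt' INTO TABLE stores
FIELDS TERMINATED BY '\t' ESCAPED BY '\\'
LINES TERMINATED BY '\r';
```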



cant grant privileges?!

2003-06-03 Thread Kalin Mintchev

hi everybody...

i just installed mysql 4.0.13 on a redhat linux 8.0
when i create a database as root and then grant privileges to another user
like this:
GRANT ALL ON somedb.* TO [EMAIL PROTECTED] IDENTIFIED BY 'passwd';

everything seems to be fine but then if i log in as user and try to get
to the mysql client on the command line i get this:
ERROR 1045: Access denied for user: '[EMAIL PROTECTED]' (Using password:NO)


if as root i su - user i can get to the mysql prompt but can't see
somedb...

it's not the first time i've installed mysql but this is the first time
i've run into this problem...

what is the problem? please help...

thanks...







Re: cant grant privileges?!

2003-06-03 Thread Kalin Mintchev
On Tue, 3 Jun 2003, Nils Valentin wrote:

 How do you login to mysql ?

 mysql -u user -p -h localhost

 should do it.

cool...  thanks.  BUT it doesn't work within a php script?!
it worked on the command line...  any idea why not within php?!

thanks


 You dont even need -h localhost in this case.

 Best regards

 Nils Valentin
 Tokyo/Japan



 On Tuesday, 3 June 2003 at 11:31, Kalin Mintchev wrote:
  hi everybody...
 
  i just installed mysql 4.0.13 on a redhat linux 8.0
  when i create a database as root and then grant privileges to another user
  like this:
  GRANT ALL ON somedb.* TO [EMAIL PROTECTED] IDENTIFIED BY passwd;
 
  everything seems to be fine but then if i log in as user and try to get
  to the mysql client on the command line i get this:
  ERROR 1045: Access denied for user: '[EMAIL PROTECTED]' (Using password:NO)
 
 
  if as root i su - user i can get to the mysql prompt but can't see
  somedb...
 
  it's not the first time i install mysql but this is the first time i get
  into this problems...
 
  what is the problem? please help...
 
  thanks...



