Re: Spreadsheet Inserting Double Quotes

2004-06-12 Thread Jigal van Hemert
From: "David Blomstrom" <[EMAIL PROTECTED]>
> --- Chris W <[EMAIL PROTECTED]> wrote:
> > If you plan on using this process again, you may
> > want to track down the
> > source of the problem,  but if you aren't going to

The "source of the problem" is the fact that it is common in a CSV file to
represent a quote inside quoted data by doubling it. The MySQL import gives
you the option to define an escape character, but in this case adding an
ESCAPED BY '"' doesn't really help at all.

> > be exporting this way
> > again and you just want to fix it, I would just open
> > the csv file in a
> > text editor and do a search and replace.
>
> That's a good question. I'd like to export as little
> as possible, but I'll probably be making constant
> changes to my database. That means I'll have to
> modify my spreadsheet (my master record), then save it
> as a csv file, then import it into my database.

You can solve it by either doing a search and replace in the CSV or
performing an update query that replaces "" by " in MySQL.
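
The update variant could be as simple as this (table and column names are only
examples, so adjust them to your schema):

  UPDATE mytable
  SET link = REPLACE(link, '""', '"');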

> I finally broke down and purchased Navicat, which can
> apparently export from Excel to MySQL. However, I
> don't have Excel; I have Microsoft Works. And I hate
> to buy Excel when I'm planning on getting StarOffice -
> but will StarOffice work with Navicat and MySQL?

StarOffice and OpenOffice.org (the Open Source version of StarOffice) both
have the option to communicate with MySQL built in (plus the option to
access any database by using ODBC)! It's a bit hidden inside the program,
but there is a database front end in StarOffice(SO)/OpenOffice.org(OOo) that
resembles MS Access, so you can directly manipulate MySQL records from
SO/OOo.
You might want to consider getting OpenOffice.org (which is free) since you
don't need the database that's included in SO (you'll be using MySQL) and
you may manage to do without the third party stuff (import/export filters,
etc.) which are not included in OOo.
If you want more info on OOo working with MySQL you can join one of the many
e-mail lists (there's probably one in your own language) (see
http://www.openoffice.org/mail_list.html and e.g.
http://da.openoffice.org/servlets/ProjectMailingListList for the Danish
mailing lists, etc.). There are many people on such a list to help you on
your way and they can probably provide you with tutorials, etc.

I think that you'll find that it is possible for programs to co-operate,
especially when both are OpenSource ;-)

Regards, Jigal.





Re: Spreadsheet Inserting Double Quotes

2004-06-12 Thread David Blomstrom
--- Chris W <[EMAIL PROTECTED]> wrote:
> If you plan on using this process again, you may
> want to track down the 
> source of the problem,  but if you aren't going to
> be exporting this way 
> again and you just want to fix it, I would just open
> the csv file in a 
> text editor and do a search and replace.

That's a good question. I'd like to export as little
as possible, but I'll probably be making constant
changes to my database. That means I'll have to
modify my spreadsheet (my master record), then save it
as a csv file, then import it into my database.

But once I get organized, I imagine my database could
and should become my master record, and I can always
reverse the procedure and save it as a spreadsheet
file. Also, I want to learn how to import directly
from a spreadsheet into a database.

I finally broke down and purchased Navicat, which can
apparently export from Excel to MySQL. However, I
don't have Excel; I have Microsoft Works. And I hate
to buy Excel when I'm planning on getting StarOffice -
but will StarOffice work with Navicat and MySQL?

Welcome to the world of compatibility!








Re: Spreadsheet Inserting Double Quotes

2004-06-12 Thread Chris W
David Blomstrom wrote:
I just saved a spreadsheet (Microsoft Works) as a csv
file and imported it into a database. It worked fine,
except for some links, which acquired double quotes
when I saved the file as a csv file. In other words,
they now look like this:
Geobop
Do you have any idea what might cause this? Is there a
way to purge these double quotes in MySQL? I can do a
search and replace in my csv file, but it takes a long
time, as it's a huge file.
 

If you plan on using this process again, you may want to track down the 
source of the problem,  but if you aren't going to be exporting this way 
again and you just want to fix it, I would just open the csv file in a 
text editor and do a search and replace.

--
Chris W
Bring Back the HP 15C
http://hp15c.org
Not getting the gifts you want?  The Wish Zone can help.
http://thewishzone.com



Spreadsheet Inserting Double Quotes

2004-06-12 Thread David Blomstrom
This isn't directly related to MySQL, but I thought
someone might have an answer - or maybe there's a way
MySQL can deal with it.

I just saved a spreadsheet (Microsoft Works) as a csv
file and imported it into a database. It worked fine,
except for some links, which acquired double quotes
when I saved the file as a csv file. In other words,
they now look like this:

Geobop

Do you have any idea what might cause this? Is there a
way to purge these double quotes in MySQL? I can do a
search and replace in my csv file, but it takes a long
time, as it's a huge file.

Thanks.







Re: "not unique table/alias"

2004-06-12 Thread Robert A. Rosenberg
At 14:40 -0400 on 06/07/2004, Jack Tanner wrote about "not unique table/alias":
I have two complex subqueries that I need to join. I suspect this
problem is due to using aliases instead of table names, but I don't
know how to work around it (temporary tables?). Please help.
SELECT col FROM (subquery-1) AS t1, (subquery-2) AS t2
JOIN t1 ON t2.col = t1.col;
Not unique table/alias: 't1'.
SELECT col FROM (subquery-1) AS t1, (subquery-2) AS t2
WHERE t2.col = t1.col;
Works fine! (But this query is different, because it doesn't get
the rows that are NULL in one of the tables.)
Using MySQL 4.1.2. Thanks in advance for your help.
I think it is screaming about a syntax error since you do not have 
two tables for the Join. Try:

SELECT col FROM (subquery-1) AS t1
JOIN (subquery-2) AS t2
ON t2.col = t1.col;


Connecting MySQL in Panther

2004-06-12 Thread chueewowee
Hello, I'm new to the list and need help of course.
My problems are in connecting to MySQL. I am running Panther 10.3.8 and MySQL
4.0.18.
I have removed all old files before reinstalling MySQL. I have installed
the MySQL Startup Item and it is correctly located.
Output from the Terminal is as follows:

[raisinspace:~] chueewow% sudo /Library/StartupItems/MySQL/MySQL start
Starting MySQL database server
[raisinspace:~] chueewow% /usr/local/mysql/bin/mysql
ERROR 2002: Can't connect to local MySQL server through socket
'/tmp/mysql.sock' (2)
[raisinspace:~] chueewow% sudo /Library/StartupItems/MySQL/MySQL start
Starting MySQL database server
[raisinspace:~] chueewow% cd /usr/local/mysql
[raisinspace:/usr/local/mysql] chueewow% sudo ./bin/mysqld_safe &
[1] 2966
[raisinspace:/usr/local/mysql] chueewow% Starting mysqld daemon with
databases from /usr/local/mysql/data
040612 23:32:05  mysqld ended
Also, from my HTML browser (Safari), when loading the page I get:
Warning: mysql_connect(): Can't connect to local MySQL server through
socket '/tmp/mysql.sock' (2) in /Users/chueewowee/Sites/index.php on line 21
Warning: mysql_select_db(): Can't connect to local MySQL server through
socket '/tmp/mysql.sock' (2) in /Users/chueewowee/Sites/index.php on line 25
Warning: mysql_select_db(): A link to the server could not be established
in /Users/chueewowee/Sites/index.php on line 25
Warning: mysql_create_db(): Can't connect to local MySQL server through
socket '/tmp/mysql.sock' (2) in /Users/chueewowee/Sites/index.php on line 29
Warning: mysql_create_db(): A link to the server could not be established
in /Users/chueewowee/Sites/index.php on line 29
2002: Can't connect to local MySQL server through socket '/tmp/mysql.sock'
(2)
Warning: mysql_select_db(): Can't connect to local MySQL server through
socket '/tmp/mysql.sock' (2) in /Users/chueewowee/Sites/index.php on line 37
Warning: mysql_select_db(): A link to the server could not be established
in /Users/chueewowee/Sites/index.php on line 37
Can't Select CONTACT
Warning: mysql_query(): supplied argument is not a valid MySQL-Link
resource in /Users/chueewowee/Sites/index.php on line 43
2002: Can't connect to local MySQL server through socket '/tmp/mysql.sock'
(2)
I have sought and tried all kinds of advice, but either it is not clear or it
is not working for me.

JP


Re: ER_PARSE_ERROR???

2004-06-12 Thread Michael Stassen
Your last row attempts to declare a column named NB of type TEXT with size 
of 5.  TEXT columns don't take a size specifier.  Either drop the size of 5 
or change NB to type CHAR or VARCHAR.
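
For instance, a cut-down statement with that last column switched to VARCHAR
parses cleanly (the other column here is made up, since only your screenshot
has the real definition):

  CREATE TABLE basics_test (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    NB VARCHAR(5) DEFAULT NULL
  ) TYPE = InnoDB;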

Michael
David Blomstrom wrote:
I'm trying to create a new table in my database. I
patterned it after another table, so I didn't expect
any errors. Yet I get this error:
#1064 - You have an error in your SQL syntax.  Check
the manual that corresponds to your MySQL server
version for the right syntax to use near '(5) DEFAULT
NULL) TYPE = InnoDB' at line 1
I don't have a clue what it's talking about. Error
#1064 is an "ER_PARSE_ERROR," which isn't much help.
I must have made a really basic, and presumably
obvious, mistake, but I can't identify it. Can you
spot it on the screenshot I put online at 
http://www.geoworld.org/basics.gif ?

Thanks.

	
		


ER_PARSE_ERROR???

2004-06-12 Thread David Blomstrom
I'm trying to create a new table in my database. I
patterned it after another table, so I didn't expect
any errors. Yet I get this error:

#1064 - You have an error in your SQL syntax.  Check
the manual that corresponds to your MySQL server
version for the right syntax to use near '(5) DEFAULT
NULL) TYPE = InnoDB' at line 1

I don't have a clue what it's talking about. Error
#1064 is an "ER_PARSE_ERROR," which isn't much help.

I must have made a really basic, and presumably
obvious, mistake, but I can't identify it. Can you
spot it on the screenshot I put online at 
http://www.geoworld.org/basics.gif ?

Thanks.








Back Tick Quoted Identifiers

2004-06-12 Thread John Five
Hi, just wondering if anyone knows if using back ticks to quote identifiers 
is in the SQL standard. I understand double quotes are meant to quote 
identifiers in the standard, but does the standard also say that back ticks 
can be used as well? I'm asking this to know if my back tick infested 
queries will work with other databases like PostgreSQL or will they only 
work with MySQL? Thanks.
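
In other words, do I have to turn queries like the first form below into the
second (shown only to illustrate the two quoting styles)?

  SELECT `my column` FROM `my table`;    -- MySQL back ticks
  SELECT "my column" FROM "my table";    -- double quotes per the SQL standard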



Re: Slow query: Getting first initials from an entire table's data

2004-06-12 Thread Michael Stassen
TK wrote:
At 05:02 PM 6/12/2004 +0200, Harald Fuchs wrote:
Other DBMSs like PostgreSQL grok indexes on functional expressions;
MySQL doesn't.  Thus your only choice seems to be storing the
uppercased initial in a separate column and putting an index on that
column.

As I indicated, I already tried that; it had no effect at all. MySQL still
scans the whole table, still doesn't use any indexes, and takes forever at it.
Nevertheless, Harald is correct.  MySQL cannot use an index on a column when 
you are feeding the column to a function, so your only hope of using an 
index is to add an initials column.

Your statement that this still doesn't work surprised me, so I put it to the 
test.  I made a table named inits with 50,000 randomly chosen initials in a 
column named init and indexed it.  Both

  SELECT DISTINCT init FROM inits ORDER BY init;
and
  SELECT init, COUNT(*) FROM inits GROUP BY INIT;
used the index and took .5 seconds on a 5 year old G3 iMac running mysql 
4.0.20.  If no index was used when you tried an indexed initials column, I 
conclude that something else in your query caused it.
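
(For reference, the test table was nothing fancier than a single indexed
column; something like this would reproduce it, though the exact definition
here is only a sketch:)

  CREATE TABLE inits (init CHAR(1) NOT NULL, INDEX (init));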


I haven't been able to think of a way to do this efficiently.  My current query looks 
like this:
select DISTINCT UPPER(LEFT(n.Name,1)) as Initial
from Names n, Things t
where n.ID = t.ID
order by Initial desc

Even if I eliminate DISTINCT, or create a single character index on
Name, or create a whole field that just has the first character of
Name, I can't figure out how to get MySQL to not have to scan the
entire table.
I don't understand what you are trying to do here.  You join Names to Things 
on ID, but you aren't retrieving any data from Things?  Perhaps that's only 
part of the real query.

I see 3 problems with this query.
1. As already explained, with UPPER(LEFT(n.Name,1)), we can't use an index 
on Name.  Replace this with n.Init, where n.Init has the first initial and 
is indexed (see the sketch after point 3).

2. You are joining the 50,000 rows of Name to rows in Things, then throwing 
away approximately 95% of them with DISTINCT.  Indexes will help, but if you 
only have single column indexes, mysql will choose between the indexes on 
Name.ID and Name.init according to which ought to require looking at fewer 
rows (probably ID).  To really use an index to best advantage here, you'll 
need a multicolumn index on ID and init.  Assuming there's some reason to 
join to Things...

3. Mysql is bad at ORDER BY ... DESC.  The index works great for ASC but 
poorly for DESC.  For example, in my test queries above, mysql takes 10 
times longer if I sort init in descending order (which makes me think mysql 
is sorting too soon).  In my case,

  CREATE TEMPORARY TABLE initials SELECT DISTINCT init FROM inits;
  SELECT * FROM initials ORDER BY init DESC;
is almost as fast as my original, ascending order query.
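
Putting points 1 and 2 together, a sketch of the changes might look like this
(column and index names are only examples):

  -- point 2: multicolumn index covering the join column and the precomputed initial
  ALTER TABLE Names ADD INDEX id_init (ID, Init);

  -- with point 1's Init column in place, the query needs no function call on Name
  SELECT DISTINCT n.Init AS Initial
  FROM Names n, Things t
  WHERE n.ID = t.ID
  ORDER BY Initial;
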
If the query you quoted above isn't the real query, perhaps if you posted 
the real one someone on the list could suggest a workable alternative.

Michael


Re: Stop query on first match

2004-06-12 Thread Michael Stassen
Emmanuel van der Meulen wrote:
On Monday, June 07, 2004 19:09, Michael Stassen wrote;
This doesn't quite make sense.  You seem to say that several rows
will match but then you say only one will.  It must be one or the 
other.  Perhaps I've misunderstood you.
Firstly, Michael thank you for your assistance; I'll elaborate.
Michael I'll attempt first without supplying the table definitions and
actual sql and etc..
Table 'a' contains a property which does not match any property on table 'b'
directly, but matches within a range.
Example;
table 'a' property 'num'
num
---
1000
2000
3000
table 'b' properties 'fromNum' and 'toNum'
   fromNum toNum
row 1.   1  1500
row 2.1501  2000
row 3.2001  4000
select... where 'num' between 'fromNum' and 'toNum'.
As seen in this example, using between only one row actually matches,
however the query engine would at first see more than one matching row i.e.,
What do you mean by "at first"?
with 'fromNum' (between is same as num >= fromNum), value 2000 from table a
would match row 1 & 2 on table b; and with 'toNum' (between is same as num
<= toNum), value 2000 from table a would match row 2 & 3 on table b; only
once the range is taken together the result matches one row, viz., row 2 on
table b.
This analysis is not quite right.  You seem to be saying that rows which 
match half of the condition will be retrieved, then filtered according to 
the other half of the condition, but that's not how it works.  If it did, 
BETWEEN would always be slow, but BETWEEN can be very fast, if the range is 
a constant and the column is indexed.  Assuming indexes on num, fromNum, and 
toNum, you should be able to verify that

  SELECT * FROM table_a WHERE num BETWEEN 1501 AND 2000;
  SELECT * FROM table_b WHERE fromNum BETWEEN 1 AND 3000;
  SELECT * FROM table_b WHERE toNum BETWEEN 1500 AND 4000;
are all very quick.  In your case, "num BETWEEN fromNum AND toNum" is the 
same as "num >= fromNum AND num <= toNum".  There is only one matching row. 
 The problem is finding it.

To find which rows in table b match, we have to look at each row and compare 
the values of num, fromNum, and toNum.  Indexes won't help much here, 
because each column's index is unlikely to narrow the list of possible 
matches much.  If the optimizer could put 2 indexes together, it might see 
that fromNum matches from row 23 up, while toNum matches from row 23 down, 
so row 23 is it, but the optimizer doesn't combine indexes -- it chooses the 
best (most restrictive) one.  Even then, it only uses the index if it 
restricts us to no more than 30% of the rows.  Furthermore, the value of num 
changes with each row of table a.  The result is an unavoidable full table 
scan on table b.

In other words, the problem is not, strictly speaking, the BETWEEN.
Now please remember table b has 1.4 million rows as in this example, with
fromNum and toNum running consecutively, so the query takes 4 seconds to
find a row in table b.  With say 200 rows in table a, that means the query
runs for a long time.
That's 4 seconds to scan all of table b, rather than 4 seconds to find one 
row.  A subtle but important distinction.

Is that right?  You have 1.4 million ranges?  Wow!  Up till now, I've been 
assuming you had a lot of data in table a to be assigned one of a relatively 
few ranges from table b.  Apparently, I've been picturing it backwards.

I'm not sure there's a way to improve this specific query, but there may be 
another way to accomplish the same thing which works better.  It's hard to 
say what that might be without a better picture of what you're doing than I 
currently have.  Perhaps if you described your data someone could make a 
suggestion.  I'm still assuming table b must be static, right?  Otherwise 
you couldn't guarantee uniqueness of ranges, I think.  On the other hand, 
different rows in table a could be in the same range (match the same row of 
table b), right?

What I did was to use limit 1, and ran query with 1 row, this took .01
second.
I assume by "ran query with 1 row" you mean one row from table a.  Now num 
is effectively a constant, and LIMIT 1 short circuits the full table scan as 
soon as one match is found.
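
In other words, once num is a literal the optimizer can stop at the first hit,
something like this (1234 stands in for the one value taken from table a):

  SELECT * FROM table_b
  WHERE 1234 BETWEEN fromNum AND toNum
  LIMIT 1;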

You also seem to imply that with BETWEEN you get a full
table scan even though there is only one match for each row.  That sounds
like an indexing problem, but it is hard to say without more information.
Indexed individually on;
fromNum
toNum
Also experimented by adding combining index on;
fromNum/toNum
Either way no difference, query runs 4 seconds.
It would help us help you if you at least posted the query and the results
of EXPLAIN.  It would probably also help if you told us more about the
tables, perhaps with SHOW CREATE TABLE.
If above does not help you, I'll bring the whole lot to the post.
Well, the more info, the better.  At least describe the data a little more. 
 Depending on exactly what you want, there may be a way to change the 
process to make this work at an acceptable speed.

Re: Slow query: Getting first initials from an entire table's data

2004-06-12 Thread TK
At 05:02 PM 6/12/2004 +0200, Harald Fuchs wrote:
>Other DBMSs like PostgreSQL grok indexes on functional expressions;
>MySQL doesn't.  Thus your only choice seems to be storing the
>uppercased initial in a separate column and putting an index on that
>column.

As I indicated, I already tried that; it had no effect at all.  MySQL still scans the 
whole table, still doesn't use any indexes, and takes forever at it.

Thanks,

TK

>In article <[EMAIL PROTECTED]>,
>TK <[EMAIL PROTECTED]> writes:
>
>> I'm still trying to come up with an efficient way to query my table of names for 
>> all first initials.  Seems to have stumped everyone.
>> I.e. There are 50,000 names, and I want the final result to be:
>> A, B, C, F, H, I, J, K...
>> That is, a list of all first initials that are actually present in the data (and 
>> ideally are also used in a joined table).
>
>> I haven't been able to think of a way to do this efficiently.  My current query 
>> looks like this:
>>  select DISTINCT UPPER(LEFT(n.Name,1)) as Initial
>>  from Names n, Things t
>>  where n.ID = t.ID
>>  order by Initial desc
>
>> Even if I eliminate DISTINCT, or create a single character index on
>> Name, or create a whole field that just has the first character of
>> Name, I can't figure out how to get MySQL to not have to scan the
>> entire table.





Re: indexing text

2004-06-12 Thread Johannes B. Ullrich

> I have a table that has a few short text fields [text(4000), text(1000)] 
> I would like to index. Do you think it is a good idea to index them 
> "simply", or is it better if I create auxiliary fields which hold the MD5 
> for the text fields and index those? Would that be faster?

Try 'Fulltext' indexes. However, they will only work if you are
using MyISAM tables. They do not work on InnoDB tables. Instead
of using 'LIKE', you should use 'MATCH' to take advantage of 
these indexes.

http://dev.mysql.com/doc/mysql/en/Fulltext_Search.html
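
A minimal example, with a made-up table and column name (substitute your own):

  ALTER TABLE articles ADD FULLTEXT (body);    -- table must be MyISAM
  SELECT id, body FROM articles
  WHERE MATCH (body) AGAINST ('words to find');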


-- 

Johannes Ullrich [EMAIL PROTECTED]
pgp key: http://johannes.homepc.org/PGPKEYS
contact: http://johannes.homepc.org/contact.htm

There are two kinds of system administrators:
The first kind solves problems with little shell scripts.
The second kind are the problem.






Re: Bug in 4.0.20

2004-06-12 Thread Johannes B. Ullrich
On Fri, 2004-06-11 at 08:50, Egor Egorov wrote:
> Wendell Dingus <[EMAIL PROTECTED]> wrote:
> 
> > RedHat Enterprise 3WS, fully up2date. MySQL binary RPMs for AMD64 won't even
> > start for me, what were they built on?. I installed the .src.rpm and built one
> > myself (-bb --target amd64). Installed that and all seems well.
> 
> This seems strange. Can you provide me with the details (mailto:[EMAIL PROTECTED], 
> please)?
> We will investigate and possibly fix the AMD build. 
> 
> And, of course, to eliminate bugs, it's better to run MySQL prebuilt 
> binaries on Linux.

BTW: I had the same issue with MySQL 4.0.20 on Suse 9.1 Pro.
I ended up rebuilding them (--target x86_64) and again, it
worked fine. Some observations:

mysqld crashed on start.
mysql-max looked like it worked ok.
mysql client crashed on start with segfault.

I put up my rebuilt binaries here:

http://www.cablemodemhelp.com/mysql/

all I did was 'rpmbuild --rebuild --target x86_64' using the
4.0.20 src rpm. 




-- 

Johannes Ullrich [EMAIL PROTECTED]
pgp key: http://johannes.homepc.org/PGPKEYS
contact: http://johannes.homepc.org/contact.htm

There are two kinds of system administrators:
The first kind solves problems with little shell scripts.
The second kind are the problem.






Re: Slow query: Getting first initials from an entire table's data

2004-06-12 Thread Harald Fuchs
In article <[EMAIL PROTECTED]>,
TK <[EMAIL PROTECTED]> writes:

> I'm still trying to come up with an efficient way to query my table of names for all 
> first initials.  Seems to have stumped everyone.
> I.e. There are 50,000 names, and I want the final result to be:
> A, B, C, F, H, I, J, K...
> That is, a list of all first initials that are actually present in the data (and 
> ideally are also used in a joined table).

> I haven't been able to think of a way to do this efficiently.  My current query 
> looks like this:
>  select DISTINCT UPPER(LEFT(n.Name,1)) as Initial
>  from Names n, Things t
>  where n.ID = t.ID
>  order by Initial desc

> Even if I eliminate DISTINCT, or create a single character index on
> Name, or create a whole field that just has the first character of
> Name, I can't figure out how to get MySQL to not have to scan the
> entire table.

Other DBMSs like PostgreSQL grok indexes on functional expressions;
MySQL doesn't.  Thus your only choice seems to be storing the
uppercased initial in a separate column and putting an index on that
column.
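
For example (Init is a made-up column name):

  ALTER TABLE Names ADD Init CHAR(1);
  UPDATE Names SET Init = UPPER(LEFT(Name, 1));
  ALTER TABLE Names ADD INDEX (Init);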





Re: Import Strategies

2004-06-12 Thread Hassan Schroeder
David Blomstrom wrote:
Talk about lousy timing! I planned on staying up all
night to get my database squared away so I can launch
my websites in the next couple days. But the 30-day
trial expired on all the MySQL GUI's I downloaded, and
I can't import comma delimited files with phpMyAdmin.

Can you recommend any other programs that would do the
job, either freeware or free trialware? Or is there
some other strategy I could use?
mysqlimport
  
`LOAD DATA`
  
Strategy: use the tools that come with the program  :-)
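
For a comma-delimited export, the LOAD DATA form might look like this (file
name, table, and options are placeholders to adapt to your CSV):

  LOAD DATA LOCAL INFILE 'mydata.csv'
  INTO TABLE mytable
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';
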
FWIW!
--
Hassan Schroeder - [EMAIL PROTECTED]
Webtuitive Design ===  (+1) 408-938-0567   === http://webtuitive.com
  dream.  code.



Replication speed

2004-06-12 Thread François Schiettecatte
Hi
I am running into an interesting issue with replication speed. The 
setup I have is very simple, one master and one slave. The master 
processes a lot of inserts and updates all the time and these are 
replicated to the slave. The slave is read from all the time.

The odd thing is that since I switched from using myisam to using 
innodb, the slave is not always able to keep up with the master, and 
lags behind. Further the machine load (as indicated by 'top') is always 
higher on the master than on the slave. However when the load on the 
master goes down (fewer inserts and updates), the slave will catch up.

Any help or insight would be greatly appreciated.
Cheers
François


Re: [SOLVED?] Access Denied for CREATE TEMPORARY TABLE

2004-06-12 Thread Robert Paulsen
On Friday 11 June 2004 06:59 pm, Robert Paulsen wrote:
> I must be missing something about "create temporary table". Here are two
> sql commands. The first works the second fails:
>
>   CREATE   TABLE mytable  (id int(10) NOT NULL auto_increment, data
> varchar(255), PRIMARY KEY (id) ); CREATE TEMPORARY TABLE mytable2 (id
> int(10) NOT NULL auto_increment, data varchar(255), PRIMARY KEY (id) );
>
> Prior to issuing the above commands I used the following grant command:
>
>   GRANT ALL ON MYDB.* TO [EMAIL PROTECTED] identified by 'password'
>
> I also tried the following:
>
>   GRANT CREATE TEMPORARY TABLE ON MYDB.* TO [EMAIL PROTECTED] identified by
> 'password'
>
> but it didn't help.
>
> What am I missing?
>

Well, I haven't done an exhaustive analysis, but I *think* there is a mysql 
bug (I'm at 4.0.18).  The GRANT statement is not supposed to require a FLUSH 
PRIVILEGES and this seems to be true for the CREATE TABLE privilege but not 
for the CREATE TEMPORARY TABLE privilege. I say this because my problem went 
away when I used mysqladmin to "flush-privileges".
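
So the sequence that worked boils down to something like this (same made-up
credentials as above; note that the GRANT syntax spells the privilege CREATE
TEMPORARY TABLES, plural):

  GRANT CREATE TEMPORARY TABLES ON MYDB.* TO [EMAIL PROTECTED] IDENTIFIED BY 'password';
  FLUSH PRIVILEGES;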

-- 
Robert C. Paulsen, Jr.
[EMAIL PROTECTED]




RE: autoincrement problem

2004-06-12 Thread Osvaldo Sommer
For your problem, you need to create a table with a pool of available
keys; when you use one, you delete it from this table, and when you no
longer need it, you return it here.
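
A rough sketch of what I mean, with made-up table and column names:

  -- pool of reusable key values
  CREATE TABLE key_pool (id INT UNSIGNED NOT NULL PRIMARY KEY);

  -- take the lowest free key, then remove it from the pool
  SELECT MIN(id) FROM key_pool;
  DELETE FROM key_pool WHERE id = 2;      -- 2 being the value just selected

  -- when a row in the main table is deleted, return its key to the pool
  INSERT INTO key_pool (id) VALUES (2);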

Osvaldo Sommer

-Original Message-
From: Nitin [mailto:[EMAIL PROTECTED] 
Sent: Saturday, June 12, 2004 3:44 AM
To: MySQL Mailing List
Subject: autoincrement problem

Hi all,

What will be the best way to implement an auto increment field? I don't want
to use the auto increment feature, as it only appends new numbers but
doesn't check for deleted values. Suppose the following are the values
in the table:

1abc
2bcd
3cde
..
..
..
9xyz

now if 2nd and 3rd rows are deleted, an autoincrement field will still
assign 10, 11, 12 to the new values, while I want it to assign the
values deleted from the table first.

What will be the best way to implement it?

Thanks in advance
Nitin




Re: autoincrement problem

2004-06-12 Thread Nitin
First of all, thanks for replying.

But you didn't get my point. I want to use a numeric field as the key
field, just not the normal auto increment.


- Original Message - 
From: "Daniel Kasak" <[EMAIL PROTECTED]>
To: "Nitin" <[EMAIL PROTECTED]>; "MySQL Mailing List"
<[EMAIL PROTECTED]>
Sent: Saturday, June 12, 2004 3:53 PM
Subject: Re: autoincrement problem


> Nitin wrote:
>
> >Hi all,
> >
> >What will be the best way to implement an auto increment field? I don't want
to use the auto increment feature, as it only appends new numbers but doesn't
check for deleted values. Suppose the following are the values in the table:
> >
> >1abc
> >2bcd
> >3cde
> >..
> >..
> >..
> >9xyz
> >
> >now if 2nd and 3rd rows are deleted, an autoincrement field will still
assign 10, 11, 12 to the new values, while I want it to assign the
values deleted from the table first.
> >
> >What will be the best way to implement it?
> >
> >Thanks in advance
> >Nitin
> >
> >
> You shouldn't use text fields as primary keys. It's much faster and more
> efficient to use a numeric field.
> You're also asking for trouble trying to re-use keys. What happens if
> someone has a key 'bcd' and then their record gets deleted, and someone
> else gets the key 'bcd'. You go back over data and see reference to
> 'bcd', and have no idea what it's referring to unless you get the
> transaction logs out and check what data the key was referring to on
> that particular day. It will be an absolute nightmare to debug, and you
> destroy any sane way of auditing your data.
> The best way to implement it, therefore, is to *not* implement it. Use
> MySQL's auto_increment field as the primary key. If you absolutely
> *must* reuse the text 'keys' you have above, then that's up to your
> application to handle. Make a char / varchar field and have your code
> scan the table for the next available key and then use that in your
> 'insert' statement. But it's a *really* bad idea.
>
> Dan
>



Re: autoincrement problem

2004-06-12 Thread Daniel Kasak
Nitin wrote:
Hi all,
What will be the best way to implement an auto increment field? I don't want to use the 
auto increment feature, as it only appends new numbers but doesn't check for deleted 
values. Suppose the following are the values in the table:
1abc
2bcd
3cde
..
..
..
9xyz
now if 2nd and 3rd rows are deleted, an autoincrement field will still assign 10, 11, 
12 to the new values, while I want it to assign the values deleted from the table 
first.
What will be the best way to implement it?
Thanks in advance
Nitin
 

You shouldn't use text fields as primary keys. It's much faster and more 
efficient to use a numeric field.
You're also asking for trouble trying to re-use keys. What happens if 
someone has a key 'bcd' and then their record gets deleted, and someone 
else gets the key 'bcd'. You go back over data and see reference to 
'bcd', and have no idea what it's referring to unless you get the 
transaction logs out and check what data the key was referring to on 
that particular day. It will be an absolute nightmare to debug, and you 
destroy any sane way of auditing your data.
The best way to implement it, therefore, is to *not* implement it. Use 
MySQL's auto_increment field as the primary key. If you absolutely 
*must* reuse the text 'keys' you have above, then that's up to your 
application to handle. Make a char / varchar field and have your code 
scan the table for the next available key and then use that in your 
'insert' statement. But it's a *really* bad idea.
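
If you do insist on a numeric key and reuse, the "scan for the next available
key" step could be a query along these lines (mytable and id are made-up names,
and the auditing and concurrency problems above still apply):

  -- smallest value whose successor is unused; returns NULL when there are no gaps
  SELECT MIN(t1.id) + 1 AS next_id
  FROM mytable t1
  LEFT JOIN mytable t2 ON t2.id = t1.id + 1
  WHERE t2.id IS NULL;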

Dan


Re: Nested Records...

2004-06-12 Thread Yves Goergen
On 12.06.2004 09:36 (+0200), Ron Gilbert wrote:
CREATE TABLE `Comments` (
   `ID` int(10) unsigned NOT NULL auto_increment,
   `ArticleID` int(10) unsigned default '0',
   `ParentID` int(10) unsigned default '0',
   `DateAdded` datetime default '0000-00-00 00:00:00',
   `Body` mediumtest,
you mean mediumtext, right?
   PRIMARY KEY  (`ID`),
   KEY `ID` (`ID`)
I believe the primary key is sufficient.
)

I can do this in PHP by issuing a separate query for each parent comment 
to get the children, and then nesting through them, etc.  Or I can get 
all the records back and sort/nest them once in PHP.  I would rather 
have MySQL just return them all to me in the right order, if possible.
Yes, that's how I do it for a similar problem. You can either run a new 
query (SELECT ... WHERE ParentID = ...) for each top-level comment of the 
article (ParentID = 0) and recurse for every new ID, or do it yourself in 
PHP by fetching all records in a single step and walking through the array, 
matching the IDs yourself. Neither should be too much work, so I'd suggest 
you run some tests to find out which is faster.
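
In SQL terms the two per-level queries would be something like this (42 and 7 
are placeholder IDs):

  -- top-level comments of one article
  SELECT ID, Body FROM Comments
  WHERE ArticleID = 42 AND ParentID = 0
  ORDER BY DateAdded;

  -- then, for each ID returned (say 7), fetch its children and recurse
  SELECT ID, Body FROM Comments
  WHERE ParentID = 7
  ORDER BY DateAdded;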

I'm not aware of a way to get the nested structure directly from MySQL, 
but that doesn't say much... ;) I just can't imagine what the result could 
look like other than a flat array you have to split up manually.

--
Yves Goergen <[EMAIL PROTECTED]>
BlackBoard Internet Newsboard System --> blackboard.unclassified.de
Free (GPL), easy to use and install, secure, innovative! (PHP+MySQL)


autoincrement problem

2004-06-12 Thread Nitin
Hi all,

What will be the best way to implement an auto increment field? I don't want to use the 
auto increment feature, as it only appends new numbers but doesn't check for deleted 
values. Suppose the following are the values in the table:

1abc
2bcd
3cde
..
..
..
9xyz

now if 2nd and 3rd rows are deleted, an autoincrement field will still assign 10, 11, 
12 to the new values, while I want it to assign the values deleted from the table 
first.

What will be the best way to implement it?

Thanks in advance
Nitin

Import Strategies

2004-06-12 Thread David Blomstrom
Talk about lousy timing! I planned on staying up all
night to get my database squared away so I can launch
my websites in the next couple days. But the 30-day
trial expired on all the MySQL GUI's I downloaded, and
I can't import comma delimited files with phpMyAdmin.

So I just wondered what strategies are available. I
would like to buy another MySQL client when I can
afford it, but I haven't decided which one. I think I
liked Navicat the best, though it's kind of expensive.
I was also using MySQL-Front and SQLyog.

Can you recommend any other programs that would do the
job, either freeware or free trialware? Or is there
some other strategy I could use?

Thanks.








Slow query: Getting first initials from an entire table's data

2004-06-12 Thread TK
I'm still trying to come up with an efficient way to query my table of names for all 
first initials.  Seems to have stumped everyone.

I.e. There are 50,000 names, and I want the final result to be:
A, B, C, F, H, I, J, K...
That is, a list of all first initials that are actually present in the data (and 
ideally are also used in a joined table).

I haven't been able to think of a way to do this efficiently.  My current query looks 
like this:
 select DISTINCT UPPER(LEFT(n.Name,1)) as Initial
 from Names n, Things t
 where n.ID = t.ID
 order by Initial desc

Even if I eliminate DISTINCT, or create a single character index on Name, or create a 
whole field that just has the first character of Name, I can't figure out how to get 
MySQL to not have to scan the entire table.  I get an EXPLAIN that looks like this:

+-------+------+---------------+------+---------+------+-------+-----------------------------------+
| table | type | possible_keys | key  | key_len | ref  | rows  | Extra                             |
+-------+------+---------------+------+---------+------+-------+-----------------------------------+
| n     | ALL  | PRIMARY,ID    | NULL | NULL    | NULL | 57674 | Using temporary; Using filesort   |
| t     | ref  | ID            | ID   | 5       | n.ID | 4     | where used; Using index; Distinct |
+-------+------+---------------+------+---------+------+-------+-----------------------------------+

Is there any way to do this, or an efficient way to query the table 26+ times with a 
list of first initials?

(My actual query examines 166,000 rows and takes 12 seconds to run, all to give me a 
list of most of the alphabet!)

Thanks in advance,

TK





Nested Records...

2004-06-12 Thread Ron Gilbert
Here is my problem. I want to create a simple message posting system so 
users can post comments to news stories.  I would like the comments to 
be nested, and not appear as a single long list.

Is there a way to retrieve all the comments from a table, ordered by the 
date the parents were added, but with children nested under the parent 
comment, ordered by date, and so on?

Oracle has this CHILD command that will do this, but it seems like this 
should be possible using GROUP BY.  If it is, I can't figure it out.

My Comment table looks like this:
CREATE TABLE `Comments` (
  `ID` int(10) unsigned NOT NULL auto_increment,
  `ArticleID` int(10) unsigned default '0',
  `ParentID` int(10) unsigned default '0',
  `DateAdded` datetime default '0000-00-00 00:00:00',
  `Body` mediumtest,
  PRIMARY KEY  (`ID`),
  KEY `ID` (`ID`)
)
What I would like to get back is:
Comment 1
Comment 2
  Comment 2a
  Comment 2b
Comment 3
  Comment 3a
I can do this in PHP by issuing a separate query for each parent comment 
to get the children, and then nesting through them, etc.  Or I can get 
all the records back and sort/nest them once in PHP.  I would rather 
have MySQL just return them all to me in the right order, if possible.

Thanks, Ron
