Just add another file entry to my.cnf separated by commas... You can add as
many separate files as you like... (within reason of course; eventually, if
the line length gets longer than 1024 characters or so, you can't add any
more). Once you edit the file, restart mysql and innobase will detect the new
file entry.
My experience (at least with the 3.23 series) is that full text indexing is
not worth the hassle if you have an existing HUGE database... I let mysql
run for about a day or two on a 20gb database (I don't remember exactly how
many rows it had) before giving up (note this was using 3.23.39). I found on
There are many different things that could cause this error. Did you check
the mysql error log for helpful information?
Most likely the problem is not with Apache; it's either mysql or your
application. One source of that error message is mysql crashing and
burning... I have seen it a few times on
The problem with using a regular expression (or even a simple wildcard
search) is that they are very slow. I have looked for a better way to search
through a mysql database but didn't find anything that worked great with
mysql. If speed is important then you are pretty much out of luck without up
> So I designed my "files" and "people" tables without any direct
> relationship with one another, thinking to link them with the SELECT
> statement.
>
> What I completely forgot, up until this point, was that I would need to
> INSERT these records (from pre-written HTML/PHP forms), and there is n
Innobase allows you to specify multiple data files and will use them
automatically. So to keep under the 2gb limit just keep adding 2gb files as
needed. You can see how much space is left in the innobase data files by
doing the following query: "show table status from dbname like
'tablename'".
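For what it's worth, a minimal my.cnf sketch of that kind of setup (paths, file names, and sizes here are made up; the manuals of that era show the entries separated by semicolons, so double check against your version's docs):

[mysqld]
innodb_data_home_dir = /var/mysql/ibdata
innodb_data_file_path = ibdata1:2000M;ibdata2:2000M;ibdata3:2000M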
I am curious, what is your reasoning for being a fan of single-table
databases?
The number of columns a table may have depends on the table type you are
using; this should be in the documentation. Your decision to put 20
judges... 3 rounds... 17 categories into a single table probably is (well
al
I am not aware of any document that describes naming conventions, but
through my use of databases and design I have found a scheme that works for
me. I follow these rules:
1. The autoincrement field (or unique id) for a table is named "tablename"
++ "id". So if the table is "books", the unique id field is named "booksid".
[snip]
From: "Jonas Larsson" <[EMAIL PROTECTED]>
> But wouldn't it be possible to let MySQL use MD5-hashed pw like UNIX? Why
> isn't that implemented? Then it would be easy to transfer the hash...
>
Yes, it definitely would be nice and would have been easier to implement than
the current Password()
Your original message got through the first time, but your email bounced.
I think what you are looking for is called mifluz and is the indexing
library that htdig uses. The link is http://www.gnu.org/software/mifluz/ .
If you develop any kind of bindings to use mifluz to index a mysql database
l
Yes. delete all ib_ files and all data files specified in my.cnf... and then
run mysql again.
ryan
- Original Message -
From: "alexus" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, August 27, 2001 5:16 PM
Subject: MySQL+InnoDB
> hi
>
> i'm trying to setup mysql w/ InnoDB
>
As stated on the bugs and fixes page at http://www.innodb.com/bugfixes.html
there is a problem with the latest innodb code that causes corruption when
doing updates... I believe I have run into that problem... First mysql would
hang... and do nothing. After killing it and restarting it, the error log
I think you have bigger issues going on with your setup than a killed perl
script causing corruption. I have experienced very little (almost no)
corruption with any of my mysql tables using various versions of mysql for
extended periods of time (the times the tables were corrupted it was u
Sure, this is possible; take a look at the GRANT command... for each local
user, add a mysql account giving them access to their own database only...
then other users will not be able to read any db but their own.
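For example (the user, database, and password are hypothetical; check the GRANT section of the manual for your version):

grant all privileges on userdb.* to localuser@localhost identified by 'theirpassword';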
ryan
- Original Message -
From: "Ulv Michel" <[EMAIL PROTECTED]>
To: <[
[snip]
> I'm very sad this is all happening. But however you may feel
> about NuSphere, I find it a bit unsettling that MySQL AB, as
> an open-source organization, would claim a trademark in the
> first place. From what I've seen, it looks like both sides
> are being equally childish.
I dont thi
Look into MAX_ROWS... ie:
alter table mytable max_rows = 1000000000
(use a large value; MAX_ROWS is a hint that sizes the internal row pointer)
ryan
- Original Message -
From: "Nathanial Hendler" <[EMAIL PROTECTED]>
To: "MySQL" <[EMAIL PROTECTED]>
Sent: Wednesday, August 15, 2001 12:19 PM
Subject: Table size limitations...
>
> I have a table that holds a lot o
After successfully upgrading to .39b things seemed to be running
great... However, the past few days I have been having some problems with
queries not completing. This only seems to occur during higher loads and I
am not sure where to begin debugging. I was hoping someone has some ideas...
Here is t
You might want to look into what your ulimit settings are for the process
starting mysql... usually this can be done with 'ulimit -a' but check your
shell's man page for more info. You could have a memory limit set for the
process, so that mysql is not able to use the amount of memory you specify.
Keep
A numeric value will be faster in most cases. The reason is that a
comparison between two numbers takes at most 1 comparison... but a
comparison between two chars takes at most n comparisons (n being the length
of the field/string). In addition, a numeric value takes less space... all
around it's a
If you want a MyISAM table to grow larger than 4GB you have to do
alter table tablename max_rows=1000000000
(the option is spelled MAX_ROWS, and the value should be a large row count).
Once you do that, you will be able to stuff the table as full as the OS will
let you.
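Something like this (the numbers are arbitrary; adding AVG_ROW_LENGTH helps MySQL size the row pointer for dynamic-length rows, and SHOW TABLE STATUS reports the resulting limits):

alter table tablename max_rows = 1000000000 avg_row_length = 200;
show table status like 'tablename';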
Innobase will allow you to get around this by making a bunch of smaller
files
When you are designing a database and you are thinking about creating a
comma delimited list, this is a good sign that you need to rethink your
design. Bitfields are a good option, however if you ever need to add
elements to the bitfield (ie bitfield A can signify the presence of 4
elements, but no
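For what it's worth, the normalized alternative to a comma delimited list looks something like this (table and column names invented for illustration):

create table item_elements (
    itemid int not null,
    elementid int not null,
    primary key (itemid, elementid)   # one row per element on an item
);
# adding a new element type later is just another row, no schema change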
To sum it all up, I am trying to import data from text files into an
innobase table (about 12m rows of data stored in text files created by doing
a select into outfile). First I tried dropping the indexes before doing load
data infile, but I found that I didn't have enough table space to
re
I would suggest that you use an INT column type to store the ip address.
This would use a lot less space than a varchar column (when dealing with the
magnitude that you describe)... the smaller the faster... The following
functions make this very easy:
INET_NTOA and INET_ATON
They are described
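For example (the address is arbitrary; note the values can exceed a signed INT, so an INT UNSIGNED column is the safe choice):

select inet_aton('209.207.224.40');   # returns 3520061480
select inet_ntoa(3520061480);         # returns '209.207.224.40'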
I think mysqld may have a problem parsing long lines in my.cnf. I receive
the following error when trying to start mysqld:
/usr/local/libexec/mysqld: unrecognized option `--r3/ibdata31:2G'
Here is the line that is offending mysqld (it is all one line in my.cnf). If
I remove or shorten this line
If this is a MyISAM table you can use myisamchk to set the auto_increment
value. Check out the man page.
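If memory serves, the relevant switch is --set-auto-increment; something like this, run while the server is stopped (the path is illustrative):

myisamchk --set-auto-increment=1000 /var/mysql/dbname/tablename.MYI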
ryan
- Original Message -
From: "Stefan Hinz" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; "Peter Wiherkoski"
<[EMAIL PROTECTED]>
Sent: Wednesday, July 25, 2001 12:15 PM
Subject: Re:
I think the main disadvantage would be the memory... Running two servers
means using twice the amount of memory... So say you are using 256mb
for the key cache... running two servers would use 512mb (not at the exact
moment the servers start, but after a prolonged period of time, of course).
Thi
The problem is just as it states, "Access denied"... You can also see from
the message that the username is 'root' and a password was supplied... so the
first thing to check would be the password the script is using to
connect with. Once that is corrected it should work fine.
Hope that helps.
First off, what BSD? There are a few of them... and what version?
Sometimes --with-low-memory doesn't seem to work... What I have done in the
past to get it to compile is edit the Makefile in that directory, find the
line that compiles sql_yacc.cc, and add the compiler flag -O0 ... Sure, that
turns
I have tables that were created using innobase compiled with 64kb pages. Now
I want to upgrade to the newer version of innobase that supports the larger
row sizes, but as it mentions on the innobase .39b download page
( http://www.innodb.com/download.html ) I must export the tables, and then
Perhaps give a little more info as to what is on your screen before the
error?
ryan
- Original Message -
From: "sherzod ruzmetov" <[EMAIL PROTECTED]>
To: "MySql Mailing List" <[EMAIL PROTECTED]>
Sent: Wednesday, July 11, 2001 1:22 AM
Subject: Installing DBI
>
> I can install other modu
I was not aware of placeholders, and the benefits of using them instead of
using $dbh->quote(). It doesn't make sense that the DBI version of quote
isn't as thorough as the code behind placeholders. But anyhow, I have
a few questions as to how this works. Here is an example from the Perl
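(For reference, placeholder usage in DBI looks roughly like this; the table, column, and connection details are invented:)

use DBI;
my $dbh = DBI->connect("DBI:mysql:database=test", "user", "password");
my $userid = 42;
my $sth = $dbh->prepare("SELECT name FROM users WHERE userid = ?");
$sth->execute($userid);   # the driver quotes the value; no $dbh->quote() needed
my ($name) = $sth->fetchrow_array;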
Most likely your size limitation is coming from the data type used for
'notes'. Check out the documentation for the sizes of the text types;
mediumtext may be more appropriate for you.
ryan
>
> I'm running into an odd string size limitation. I've traced it down as far
> as I can get, and I didn't
Seems that you are not taking advantage of Perl. This is what you can do:
my $parentid = 42;        # whatever parent you are after
my $orderby = 'DESC';
my $sth = $dbh -> prepare (qq{
SELECT message.name, contents, user.name, message.id
FROM message, user
WHERE folder='N' and parentid=$parentid
GROUP B
If you want a copy of the DATABASE and not of a table, I would suggest
(using unix) ...
cp -R /var/mysql/products /var/mysql/products1
where /var/mysql is the datapath, the dir that contains all the databases...
in my setup I usually make it /var/mysql... but often it's in other places.
ryan
- Original Message -
From:
> Hi,
>
> I am working on a banking project.
>
> we are planning to take our tour into vb/mysql combination.
>
> Please help me sending your comments.
Where would you like help sending my comments to? heh.
It sounds like you are asking for comments regarding your choice of
vb/mysql... If that i
Duplicate entry 'x' for key y means that there is a unique key (key 1) on a
certain column that already contains the entry '4'... You are trying to
insert another row with '4' and hence it complains, since that key says the
value must be unique.
In order to find out what line is #46... it's the 46th
> Heikki> Hi!
> Heikki> I checked that in the COUNT(*) below InnoDB does
> Heikki> not fetch the whole row if it does the query through
> Heikki> a secondary index. It may sometimes have to
> Heikki> fetch it to determine if the row should be visible
> Heikki> in the current consistent read, the t
When you delete a row from a table that may be referenced in another
table... you must delete all references to it from the other tables. This is
done in the same way in which you deleted the first row...
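In other words, something like this (hypothetical tables; there are no cascading foreign key deletes here, so you issue one DELETE per referencing table):

delete from orders where customerid = 42;     # drop the references first
delete from customers where customerid = 42;  # then the row itself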
ryan
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> As long as MySQL doesn't manage
> ryc>
> Heikki> Well, mystery solved :). Except that why MySQL uses hours to
> Heikki> sieve away the extraneous rows in the first query.
> >>
> >> To be able to answer the question why:
> >>
> >> SELECT COUNT(*) FROM table1 WHERE col1=
> Heikki> Well, mystery solved :). Except that why MySQL uses hours to
> Heikki> sieve away the extraneous rows in the first query.
>
> To be able to answer the question why:
>
> SELECT COUNT(*) FROM table1 WHERE col1=0 and col2=0
>
> Is slow, I would need to get an EXPLAIN for this query.
mysq
The OS may not have a 2GB limit, however the table MAY. If you do a
myisamchk on the table it will tell you if you are running out of space in
the table. If you are... you need to do something like "alter table tablename
max_rows=1000000000"... (it's in the manual)
ryan
> > I have a Linux machine (Linux
> >heh, my ($id) = $dbh->selectrow_array("SELECT MAX(id) FROM tbl;");
>
> You're in for a VERY nasty surprise as soon as two of these happen to
> run in parallel.
I agree, you may get another insert before that select occurs and end up
getting the wrong id. You should use "SELECT LAST_INSERT_ID()"
Whoops, sorry about that last message.
Try using the following:
my $id = $sth->{insertid};
That is what I had to change it to, to get it working.. despite what the
documentation says.
GL.
ryan
> > My ISP recently updated one of their servers to 3.23.38, and all of a
> sudden
> > a BUNCH of my
I had the same problem when upgrading... and I found that using
> My ISP recently updated one of their servers to 3.23.38, and all of a
sudden
> a BUNCH of my code stopped working. I use a number of insert queries
(new
> items, new search caches, etc), where I insert a blank record, and then
t
Keep in mind the version of innobase that I am using is compiled with a page
size of 64k.
I have a table of 10 million rows (give or take; select count(*) from
tablename is slow for innobase tables so I don't do it often). Every day I
need to run a query that selects rows that are marked for process
When beginning and committing a transaction using perl and DBI, does the
commit need to come from the same statement handle as the begin, or does it
just need to be from the same database connection handle?
Thanks.
ryan
-
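(For the record: in DBI, transaction control lives on the database handle, not the statement handle, so any statement prepared on that connection is covered. A minimal sketch, with invented table names:)

use DBI;
my $dbh = DBI->connect("DBI:mysql:database=test", "user", "password",
                       { AutoCommit => 0 });  # run inside a transaction
$dbh->do("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
$dbh->do("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
$dbh->commit;   # commit/rollback are $dbh methods, never $sth methods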
Before
My application needs to insert between 200-500k rows about once a day...
When using MyISAM tables I determined the best way to get this done (while
still allowing users to perform select statements from the table) was to use
mysqlimport with --low-priority. This way all the inserts get bundled up
Yes, the server must be able to open the file. I don't know how you would
write a file path that accesses another machine, heh.
ryan
> In order to add data to mysql, using LOAD INTO, am I right in thinking
> that this file has to be local to the machine. Therefore I have to ftp
> the file I want
I am aware that currently innobase tables can not have arbitrarily large
text/varchar fields, and that support for this will be released sometime in
the month of June. The application I am working with requires text fields
(usually less than 64k), and I desperately need row level locking. Either I
contin
For passwords it is good to use the sql function password(). This is done
like this:
to insert:
insert into users values ('username1234', password('userspassword'));
to load:
select * from users where username='username1234' AND
password=password('userspassword');
Hope this helps.
ryan
> Hi,
>
> > >
> does not and is quite fast. Somehow the optimizer is getting confused
> by the "sid = 16". I was thinking that perhaps there is some magic
> way of rephrasing the problem that MySQL would understand better.
> Failing that, what is the recommended way of making this query
> quicker? Should I i
With the correct indexes this query should run in less than a second.
Basically what you should do is make an index covering the columns used in
the where clause. For instance, if you had a query ... WHERE A=X AND B=Y
AND C=Z (A, B, and C are in the same table) you would create an index (A,B,C)
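For example (hypothetical table and column names):

alter table mytable add index idx_abc (A, B, C);
# all three WHERE conditions can now be resolved from this one index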
I believe what you are talking about is an inverted text index, where the
word is used to look up which documents it occurs in. To create this index
you would have to make a program that takes a "document" (be it a row in the
database or a file or whatever), splits it up into words, and inserts these
wor
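A rough sketch of the tables such a program would fill (all names invented):

create table words (
    wordid int not null auto_increment primary key,
    word varchar(64) not null,
    unique (word)
);
create table word_doc (
    wordid int not null,
    docid int not null,
    primary key (wordid, docid)
);
# documents containing 'mysql':
#   select d.docid from words w, word_doc d
#   where w.word = 'mysql' and d.wordid = w.wordid;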
As far as I know, they should be the same. The only case in which I can see
count(colname) being slower is if it doesn't count rows that contain null
values for that column (this is only a guess; I don't know if count does
that or not).
ryan
> Would count(*) show the same performance as count(1
Changing to C++ is not likely to give you a noticeable speed difference
because your bottleneck is not the code but the queries. With proper
database design you should be able to achieve those results with one query,
and it should be fast given the right indexes.
ryan
> I am searching to spee
engine in MySQL ???
>
> > -----Original Message-----
> > From: ryc [SMTP:[EMAIL PROTECTED]]
> > Sent: Tuesday, May 15, 2001 3:29 PM
> > To: James Treworgy; Chris Nichols
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: Re[2]: MySQL FullText improvements in V4.0
>
Varchar fields are not binary, blobs are.
ryan
> Hi & Help!
>
> I would like to store some short binary data (encrypted). I used a
> varchar(24) binary field for this.
> But it seems that MySQL cuts white-spaces from the end when storing the
> data in the table. When the encrypted string now ha
What kind of speed can you expect to get from this after tbl_doc_index fills
up with tens of millions of rows? Is this scheme suitable for that magnitude
of data?
Also, if you wish to run a query that returns all documents that contain
token x or token y, would mysql neglect to use the key
Refer to the man page for myisamchk
ryan
> Hi,
>
> how I can change AUTO_INCREMENT value for one concrete table in mysql
> database? (e. g., this value you can get from 'show table status'). Thanx!
>
> oSup
>
>
>
> e-mail: [EMAIL PROTECTED]
> ICQ: 10798264
> tel.: 06
> -Original Message-
> From: ryc [mailto:[EMAIL PROTECTED]]
> Sent: Friday, May 11, 2001 11:58 AM
> To: 'Richard Reina'; [EMAIL PROTECTED]
> Subject: Re: Hom many columns is too many?
>
>
> I dont think the problem was redudant data. He is talking about 75
I don't think the problem was redundant data. He is talking about 75 distinct
fields that no other table contains, and ALL the data is related to the
contact. It makes sense that a table with more columns would have a higher
overhead when parsing the queries, but other than that I don't _think_ there
If you wanted to do this using a SELECT you would do a LEFT JOIN or
equivalent. Glancing over the DELETE syntax in the documentation, it does
not look like DELETE supports any such mechanism. I don't think it is
possible to involve more than one table in a DELETE query.
ryan
> I have two ta
I would suggest creating a new table to hold vendor information. Then remove
the varchar vendor field in the parts table and replace it with an integer
that represents the vendorid from the vendor table you just created. This
should speed things up considerably. You can do a left join any time you
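Something along these lines (hypothetical names):

create table vendors (
    vendorid int not null auto_increment primary key,
    name varchar(80) not null
);
# parts.vendorid (an INT) replaces the old varchar vendor column:
select p.partnum, v.name
from parts p left join vendors v on p.vendorid = v.vendorid;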
Perhaps you can type the query in notepad, and then paste it into the mysql
client's window.
If that is not suitable and you still want to go with the batch file scheme,
type "type batchfilename | mysql -u ... "
ryan
> Hi all,
> unfortunately, I am forced to do some background mysql development
To import you should do
cat filename | mysql -uusername -ppassword databasename
ryan
- Original Message -
From: "Rachel-MY" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 01, 2001 9:38 PM
Subject: Problem with mysqldump
Hi everyone,
I'm trying to backup al
> > I have a fairly large table (greater than 4mil rows) that I would to
preform
> > a query like:
> >
> > SELECT col1,col2 FROM table WHERE a=1 AND c=2 ORDER BY b DESC LIMIT 50;
> >
> > I have an index on the table INDEX1( a,b,c );
> >
> > When running the query as is, it takes around 4seconds. If I
I have a fairly large table (greater than 4mil rows) on which I would like
to perform a query like:
SELECT col1,col2 FROM table WHERE a=1 AND c=2 ORDER BY b DESC LIMIT 50;
I have an index on the table: INDEX1( a,b,c );
When running the query as is, it takes around 4 seconds. If I omit the
"DESC" part the que
It will run just fine; you need the patched version only if you want
transaction support.
ryan
> Hi there
>
> I have managed to download the "Berkeley for Windows" installation,
however
> I have noted that MYSQL will not run without a patched version of
> berkeley's. Where can I obtain such a instal
Type "perror #error_num" at the prompt, you can get what the error codes
mean from there.
ryan
> Hello,
>
> A few days ago I asked if anyone knows the numeric values of the
> Mysql error codes. Nobody responded, so I looked through the source
> code to find the answer. This info would be valu
I have an auto_increment field in a table, and after doing a few operations
the flag has gone away. This is using mysql 3.23.36 on OpenBSD. Here is how
it happened:
I am using perl with DBI to interface with mysql. I have been inserting
"NULL" into this field to allow mysql to fill in the auto_increment
You need to change MAX_ROWS on the table. It is in the documentation;
something like "alter table tablename max_rows=1000000000", for tables
larger than 2 gigs.
ryan
> I've got tables in ISAM format whose indexes are way over 64 MB. However,
> when I try to
> I have some VARCHAR columns (name, e-mail address and subject) that are
> used in every query, though. Should I change them to CHAR(255) so that I
> can do this? (On a related note, does anyone know whether the RFC for
> e-mail messages limit the subject length/name length/e-mail address length
Messages that are around 10k should be just fine, as long as they are the
majority.
A few tips for you... Store all the varchar/text columns for each message in
a separate table so that the primary message table only has fixed length
fields (ie messageid, size, # of replies, userid, bodyid, ...).
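For instance (a sketch with invented column names):

create table messages (        # every column fixed length -> static rows
    messageid int not null auto_increment primary key,
    userid int not null,
    bodyid int not null,
    size int not null,
    replies int not null
);
create table message_text (    # the variable length pieces live here
    bodyid int not null primary key,
    subject varchar(200),
    body text
);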
When performing a query that will return a large dataset (by large I mean
100k+ rows), is it more efficient (memory usage wise) to use the results as
you fetch them, or to fetch all the results into an array, free the statement
handle, and then process from the array? What about performance wise? I am
usin
What you wrote will work just fine. It is very easy to create a test case
and see if it works, saves you the trouble of posting to the list.
mysql> create table test (a int, b int, c int);
Query OK, 0 rows affected (0.09 sec)
mysql> insert into test values (2,3,5);
Query OK, 1 row affected (0.00
# perror 139
Error code 139: Unknown error: 19
139 = Too big row (>= 16 M)
This means the 28MB row you tried loading is bigger than the max allowed
packet size. You might want to raise the max_allowed_packet variable (ie -O
max_allowed_packet=32M or something similar on the command line).
The d
You can make a script that formats all the file names into a text file and
use mysqlimport or "load data infile" to import them into the database. Or
you can use a scripting language like perl to get all the file names and
insert them into the database.
hope that helps.
ryan
- Original Mes
I believe the problem is with your use of "LOCAL". LOCAL tells the client to
read the file from the machine the client is running on and ship it to the
server; without it, the server reads the file from its own filesystem. If
the file actually lives on the machine running mysqld, that could explain
why it can't be read. Try removing the "LOCAL" part, you also might want to
enab
I have been getting the same error message, but using perl and DBI. One
thing it MAY be, though I haven't been able to pinpoint it, is my program
dying because of a bad query or the mysql server getting a sig11. Dunno.
Anyone else have this problem? Anyone know what causes it or if i
You are expecting an awful lot from mysql: to remember data that it never
recorded. When you inserted something longer than 50 characters into the
varchar(50) column, it dropped the rest of the string that would not fit.
Changing the column size will not bring back the data.
ryan
> I had a column
Try
mysqldump -u username -p --opt databasename > dumpfile.sql
to get it back into another mysql server you need to create the database you
want to put it in, and then
cat dumpfile.sql | mysql -u username -p databasename
Hope that helps.
ryan
- Original Message -
From: "Jorge Cordero"
I am having trouble getting this query to run faster:
SELECT * FROM t1 WHERE col1 = 'text' OR col2 = 'text';
Here is the output of the explain for that query
+-------+------+---------------+------+---------+---
| table | type | possible_keys | key | key_len | r
You might want to check to see if the server is still running; it could have
crashed and burned, in which case you would get that error when trying to
connect.
ryan
> I was looking through my website and everything was fine and then all of a
> sudden I got this error:
>
> ERROR 2002: Can't connec
You will get errors like these if your script does not print out a header
before spitting out other information. It is usually best to print the
header at the very beginning of the script so you do not have to worry about
it; you don't have to wait until you are about to print out some html.
$outp
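(In Perl CGI terms the usual pattern is simply to emit the header, plus the blank line that ends it, before anything else:)

print "Content-type: text/html\n\n";   # header first, then any HTML output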
It is not that mysql is slow; it's your database design and the fact you
have 2.5 million records in the table.
It is not clear from your description of the problem what you are trying to
do, so giving advice is that much harder. If you are creating an index on
the column hari, it will take a long
Instead of pulling using equals, ie "select * from table where id =
$php_random_variable", you can do this instead: "select * from table where
id <= $php_random_variable LIMIT 1"... As long as there is at least one row
with an id at or below that value you will get back a result. Hope that
helps.
ryan
- Original Me
First let me explain what I am trying to do. I want to archive many, many
messages into a mysql database. The format is pretty simple... either the
message is a parent, or it is a reply. The problem is that some of the
queries are taking 5-10 seconds, which is not acceptable for my application.
The