Hello,
I've recently come across a problem I couldn't seem to solve by myself.
I have a db with an objects table; each of those objects may belong to
groups of objects. The number of groups can be about 256, and an object
belongs to anywhere from one to many different groups at once.
I cannot find a good
there is a better way to
do all this...
This has the same problems as the LIKE '%100%203%' approach, but a full
text search is replaced by math on each row. In both cases you lose the
ability to use any kind of index.
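The usual indexable alternative to both approaches is a separate membership table. A sketch, assuming hypothetical table and column names:

```sql
-- One row per (object, group) membership; ~256 groups fit in SMALLINT.
CREATE TABLE object_group (
  object_id INT NOT NULL,
  group_id  SMALLINT NOT NULL,
  PRIMARY KEY (object_id, group_id),
  INDEX grp (group_id)
);

-- All objects in group 203, answered from the grp index
-- instead of a full scan:
SELECT object_id FROM object_group WHERE group_id = 203;
```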
Sorry for this long letter; I hope I have managed to put the problem
straight, I
Lecho [EMAIL PROTECTED] writes:
I have a db with objects table, each of those objects may belong to groups
of objects. The number of groups can be about 256 and an object
belongs from one to many different groups at once.
Lecho,
I threw together the following tables/data/queries that I
Please reply to the list, and not to me. Thanks.
Making the MySQL indexes would be your responsibility. Importing the data
would most likely not import the index definitions also. You need to
recreate those. Your explain seems to indicate that you have *no* indexes on
your table. I would
Hi Joshua,
Making the MySQL indexes would be your responsibility. Importing the data
would most likely not import the index definitions also. You need to
recreate those. Your explain seems to indicate that you have *no* indexes on
your table. I would guess that your query doesn't hang it
Rick,
- Original Message -
From: Rick Ellis [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, April 01, 2004 11:27 AM
Subject: InnoDB problems with 4.0.18-max
Hi Guys,
We are currently using MySQL as the backend to the RT Request Tracker
Ticketing system
I broke it and can't fix it or reinstall it.
I installed the MacOS X package, 'mysql-standard-4.0.18'. It seemed to
be working. I started 'mysqld_safe'. Nothing happened but the prompt no
longer appeared. As per the instructions, I hit control-z and it said
the process was suspended, so I
Hello list,
Recently I've been on the job of migrating a large (about 1.5GB)
database built in MS SQL Server to MySQL. The migration was done OK; I
used the SQLyog utility to do this. The problem is that one table has
image column types ... I tried to view these column types (blob data
types
You can't? How are you trying to display? What are you using? A CGI
script? A database utility? Something else? We need a bit more information
to answer the question.
j- k-
On Monday 05 April 2004 05:19 pm, Rodrigo Galindez said something like:
Hello list,
Recently I've been
MySQL is very stable on large databases...I would suspect inefficient indexes.
What does your query look like? What is the output when you put EXPLAIN in
front of your query?
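For reference, EXPLAIN is just the keyword prefixed to the query; the table and column here are placeholders:

```sql
-- "type: ALL" with "key: NULL" in the output means a full table scan,
-- i.e. no usable index for this query.
EXPLAIN SELECT * FROM mytable WHERE mycolumn = 'some value';
```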
I don't know anything about SQLYog blob display, so can't comment there.
j- k-
On Monday 05 April 2004 05:41
Since you have used SQLyog, I would say you should contact the
SQLyog people about this.
Karam
--- Joshua J. Kugler [EMAIL PROTECTED] wrote:
You can't? How are you trying to display? What
are you using? A CGI
script? A database utility? Something else? We
need a bit more information
to answer
Hi Guys,
We are currently using MySQL as the backend to the RT Request Tracker
Ticketing system. The problem is that we are seeing total data loss from
the InnoDB after a proper shutdown of the database using mysqladmin
shutdown.
We have observed this once on a Sparc Enterprise 420R with 4 CPU's
Hello, I am still trying to get my replication working without success. I included show
processlist and show slave status. Can anyone make out from them what I am lacking?
I am running mysql 5.0.0 on a linux master, trying to replicate to a windows slave
Best regards
/Jonas
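For comparison, pointing a slave of that era at its master usually comes down to something like this; the host, credentials, and log coordinates below are placeholders, not taken from Jonas's setup:

```sql
-- Run on the slave, then verify that Slave_IO_Running and
-- Slave_SQL_Running both say Yes in SHOW SLAVE STATUS.
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS;
```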
From: Jonas Lindén [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 01, 2004 10:33 AM
Subject: Problems replicating
Hello, I am still trying to get my replication working without success. I
included show processlist and show slave status. Can anyone make out from
them what I am lacking?
I am
problems with WAS 5.1 access
MySql
datasources?
Tom,
Make sure you're using
'com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource'
as the
datasource implementation class that you plug into
your WebSphere
config...For some reason, no other classes (Driver
or plain DataSource)
seem
Hello,
I have been running mysql3.23 on a FreeBSD server for a long time. Lately I decided to
upgrade mysql3.23 to mysql4 using ports. So I first did make uninstall in
/usr/ports/databases/mysql323-client and mysql323-server. Then, from
/usr/ports/databases/mysql40-server I gave make install
MyISAM tables
http://www.innodb.com/order.php
From: queritor  Date: March 20 2004 3:22am
Subject: InnoDB Hot Backup problems
port when you are trying to connect
- that MySQL server is not running with the --skip-networking
option.
- MySQL-Server IS running
- I use the standard port 3306
- skip-networking is not set
I can run the script without any problems on the machine the db is
running on - I can use phpmyadmin to dump
Jason Unrein wrote:
Before I start, this is a compile problem (or so I
think) and from what little I read in this forum, it
looks ok to post. If not and you know the proper place
to post, please let me know. Now for the good
stuff
I'm attempting to write a simple tool in C that needs
to be
I'm having the following problem while trying to run ibbackup when the
database is using innodb_flush_method=O_DIRECT
This is on Redhat Enterprise 3.0.
As you can see, it's reporting an error code of 0, which supposedly means
'success'
Is there a way around this or will I have to use another file
Jochen,
what's the result of
prompt> mysql -uroot -pXXX -hxxx.xxx.xxx.xxx
when you do it on the client host? (I still suspect that permissions are
not properly granted).
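If it does turn out to be a grants problem, the fix is along these lines; the client host and password are placeholders:

```sql
-- Allow root to connect from the client host in question.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'client.example.com'
    IDENTIFIED BY 'XXX';
FLUSH PRIVILEGES;
```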
Regards, Thomas Spahni
On Wed, 17 Mar 2004, Jochen Kaechelin wrote:
I use the following Script to backup a remote
I use the following Script to backup a remote MySQL-Server.
DATUM=`date +%Y_%m_%d__%H_%M`
BACKUPDIR=/home/jochen/SICHERUNG/MySQL_Dumps/debby/$DATUM
echo
echo Creating backup directory $BACKUPDIR...
echo
mkdir -p $BACKUPDIR
for DB in db1 db2
do
echo
echo Creating
Jochen Kaechelin [EMAIL PROTECTED] wrote:
I use the following Script to backup a remote MySQL-Server.
DATUM=`date +%Y_%m_%d__%H_%M`
BACKUPDIR=/home/jochen/SICHERUNG/MySQL_Dumps/debby/$DATUM
echo
echo Creating backup directory $BACKUPDIR...
echo
mkdir -p $BACKUPDIR
for DB
port when you are trying to connect
- that MySQL server is not running with the --skip-networking
option.
- MySQL-Server IS running
- I use the standard port 3306
- skip-networking is not set
I can run the script without any problems on the machine the db is
running on - I can use phpmyadmin to dump
it as the libmysqlclient.a wasn't compiled statically
like I understood it to be. Is this so? If so, is
there any way for me to compile my program statically
without having to recompile libmysqlclient.a
statically?
I've tried linking with -lnsl and -lz (the z resolved
the compress problems but nsl won't get rid
Abubakr wrote:
Hi,
I'm using mysql 4.0.15 and tomcat 4 as a web server on a linux 8 machine. Now the
problem that I am facing is that while testing my web application, when I send too many
refresh requests to the web page, the server's CPU utilization reaches 100% for a
very long duration of
Hi,
I'm using mysql 4.0.15 and tomcat 4 as a web server on a linux 8 machine. Now the
problem that I am facing is that while testing my web application, when I send too many
refresh requests to the web page, the server's CPU utilization reaches 100% for a
very long duration of time and on
Hi Marie!
OK, first of all, you could download an rpm package and have the redhat rpm
system manage all the installation details for you. This has one basic
advantage: using the rpm system that comes with redhat, you'll be able to
centrally manage all your packages automatically, which is a very nice
thing
Hello,
I'm very new to this Linux OS, and just downloaded MYSQL and I'm having a problem.
I downloaded your Mysql version Linux (x86, libc6) and once downloaded I was unpacking
the files from File-Roller that was part of the Linux Redhat installation. When I
went to extract the files and
Marie,
Welcome to the wonderful world of Linux! Glad to hear you're making the
jump :) One of the major drawbacks (or bonuses, depending on your point
of view) is that you have to be familiar with the command line.
You mentioned you are on Redhat. There are two options here--either you
could
Hi,
I am currently experiencing a few problems getting mysql to run on my PC. I
have a windows XP machine, P4 2.6 GHz. I have downloaded version mysql
4.0.15 from mysql.com and am struggling to get the program to work. I have followed
the instructions listed in a book I purchased called
First of all, get rid of Windows and use Linux =)
Secondly, try downloading the newest version 4.0.18-nt from mysql.com. That
will help you out. I run Windows, Mac, and Linux versions of MySQL and
haven't had any problems with the MySQL installer for any OS.
If you have any other questions
currently experiencing a few problems getting mysql to run on my PC. I
have a windows XP machine, P4 2.6 GHz. I have downloaded version mysql
4.0.15 from mysql.com and am struggling to get the program to work. I have followed
the instructions listed in a book I purchased called mysql made easy
Hi,
if I try a multi-table update with a Berkeley DB table, where the updated
table is bigger than the joined table, the system produces the strange "Got
error 22 from table handler" error. The attached (zipped) script should
demonstrate the problem.
Regards,
Thomas Kathmann
[EMAIL PROTECTED]
Dear newsgroup readers,
I am doing research on spatial data and how it can be used in open source
systems like mySQL.
Can someone give me some outline of problems in handling spatial data in
mySQL.
Many Thanks
Saurabh
Saurabh Data wrote:
I am doing research on spatial data and how it can be used in open
source systems like mySQL.
Can someone give me some outline of problems in handling spatial data in
mySQL.
If you are specifically looking for problems you might find the
following interesting:
http
At 18:23 +0100 2/27/04, Jochem van Dieten wrote:
Saurabh Data wrote:
I am doing research on spatial data and how it can be used in open
source systems like mySQL.
Can someone give me some outline of problems in handling spatial
data in mySQL.
If you are specifically looking for problems you
Duncan Hill [EMAIL PROTECTED] wrote:
Mysql version: 4.1.1
Platform: Linux, pre-compiled RPMs from mysql.com
My problem:
Right now, I use a routine that selects the IDs that haven't been seen, and
promptly does an insert into notifications_seen to flag that it has been
seen. This works
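One common way to get the "not yet seen" IDs in a single statement, assuming notifications_seen is keyed on the same recid:

```sql
-- Rows in notifications with no matching row in notifications_seen.
SELECT n.recid
FROM notifications n
LEFT JOIN notifications_seen s ON s.recid = n.recid
WHERE s.recid IS NULL;
```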
Eric Scuccimarra wrote:
Have one more question - indexing the relevant columns based on the
explain info has made all of our queries immensely faster.
But it appears that new rows are not automatically indexed. Does anyone
know about this and if they are not indexed how do I reindex the tables?
eBuilt, Inc. (http://www.eBuilt.com)
Date: Thu, 26 Feb 2004 18:40:57 +0100
To: [EMAIL PROTECTED]
From: =?iso-8859-1?Q?=22Carl_Sch=E9le=2C_IT=2C_Posten=22?=
[EMAIL PROTECTED]
Subject: Problems connecting to MySQL with WLS
Message-ID:
[EMAIL PROTECTED
On 26 Feb 2004 at 13:22, Eric Scuccimarra wrote:
But it appears that new rows are not automatically indexed. Does
anyone know about this and if they are not indexed how do I reindex
the tables?
You're misunderstanding something. When you create an index, all the
rows in the table are
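In other words, an index is built once and then maintained automatically on every INSERT, UPDATE, and DELETE; there is nothing to re-run. A sketch with placeholder names:

```sql
-- Create the index once; MySQL keeps it current as rows change.
CREATE INDEX idx_mycolumn ON mytable (mycolumn);
-- List the indexes that exist on a table:
SHOW INDEX FROM mytable;
```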
Have one more question - indexing the relevant columns based on the explain
info has made all of our queries immensely faster.
But it appears that new rows are not automatically indexed. Does anyone
know about this and if they are not indexed how do I reindex the tables?
Thanks.
Hello!
I'm using a WLS server and MySQL. Where am I supposed to put the
mysql-connector-java-3.0.11-stable-bin.jar to make sure WLS will find it? I've tried
several places, e.g. under ttk and right under classes. Still WLS doesn't find my
mysql.jar file. It works when I'm compiling it locally
For anyone who is interested the thing that worked and brought the query
down from 8 minutes to 5 seconds was separating out the JOIN to remove the
OR. I made it into two queries and UNIONed them together and it all works
beautifully now.
Thanks.
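The rewrite described above, using the table and column names quoted elsewhere in the thread, looks roughly like this:

```sql
-- Two joins the optimizer can index, UNIONed together, replacing one
-- join whose OR condition forced a scan.
SELECT a.* FROM Table1 AS a
INNER JOIN Table2 AS b ON a.ID = b.ID
UNION
SELECT a.* FROM Table1 AS a
INNER JOIN Table2 AS b
        ON a.Field1 = b.Field1 AND a.Field2 = b.Field2;
```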
At 02:33 PM 2/25/2004 -0800, Daniel Clark
I am doing a very simple query joining two copies of tables with identical
structures but different data. We are running MySQL 4.1.1.
The tables each have about 24,000 lines of data in them. For some reason
this query, which is a simple join between the two tables is taking 8
minutes to run.
I am doing a very simple query joining two copies of tables with identical
structures but different data. We are running MySQL 4.1.1.
The tables each have about 24,000 lines of data in them. For some reason
this query, which is a simple join between the two tables is taking 8
minutes to run.
What does the explain look like?
-Original Message-
From: Eric Scuccimarra [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 25, 2004 1:03 PM
To: [EMAIL PROTECTED]
Subject: Query Problems
I am doing a very simple query joining two copies of tables with identical
structures
Do you have separate indexes on:
Table1.ID
Table2.ID
Table1.Field1
Table2.Field1
Table1.Field1
Table1.Field2
SELECT *
FROM Table1 as a
INNER JOIN Table2 as b ON (a.ID = b.ID or (a.Field1 = b.Field1 and
a.Field2 = b.Field2))
WHERE bla bla bla
We have
On 25 Feb 2004 at 13:09, Eric Scuccimarra wrote:
SELECT *
FROM Table1 as a
INNER JOIN Table2 as b ON (a.ID = b.ID or (a.Field1 = b.Field1 and
a.Field2 = b.Field2)) WHERE bla bla bla
It's hard to know without seeing the indexes and the full WHERE
clause, but part of the
No, we tried individual indexes and then one big grouped index but not
individual indexes on each of the fields. Adding the index actually added a
few seconds to the query so we weren't sure if that was the way to go.
I'll try this, though.
Eric
At 10:36 AM 2/25/2004 -0800, Daniel Clark
I know Oracle likes the indexes separately, but MySQL might like combinations.
No, we tried individual indexes and then one big grouped index but not
individual indexes on each of the fields. Adding the index actually
added a few seconds to the query so we weren't sure if that was the way
to
Tried to make the indexes separate and did an EXPLAIN and no performance
increase and this is what the explain says:
id  select_type  table  type  possible_keys                    key   key_len  ref  rows  Extra
1   SIMPLE       tb     ALL   PRIMARY,tb_ndx3,tb_ndx4,tb_ndx5  NULL
Scott Purcell [EMAIL PROTECTED] wrote:
I am trying to create some tables that I can use the delete on cascade
function for. This would help me code the project and ensure data
integrity. I am on the docs @
http://www.mysql.com/doc/en/InnoDB_foreign_key_constraints.html but I am
not
Mysql version: 4.1.1
Platform: Linux, pre-compiled RPMs from mysql.com
Table 1:
CREATE TABLE `notifications` (
`recid` int(11) NOT NULL auto_increment,
`recdate` datetime NOT NULL default '0000-00-00 00:00:00',
`expiry` datetime default NULL,
`notify_title` varchar(150) default NULL,
FOUND!
After many days of searching for the solution, we found it! The problem is solved
on the computer running Microsoft Jet 3.5.
You must change to 0 the value in the registry key
hkey_local_machine\software\microsoft\jet\3.5\engines\odbc; see the DWORD
ConnectionTimeout. The default is 600
Duncan Hill [EMAIL PROTECTED] wrote:
Mysql version: 4.1.1
Platform: Linux, pre-compiled RPMs from mysql.com
[skip]
My problem:
Right now, I use a routine that selects the IDs that haven't been seen, and
promptly does an insert into notifications_seen to flag that it has been
seen.
Hello,
I am trying to create some tables that I can use the delete on cascade function for.
This would help me code the project and ensure data integrity. I am on the docs @
http://www.mysql.com/doc/en/InnoDB_foreign_key_constraints.html but I am not quite
understanding the syntax.
I am
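The pattern that manual page describes boils down to this; the table names are examples, and TYPE= is the 4.0-era spelling:

```sql
CREATE TABLE parent (
  id INT NOT NULL,
  PRIMARY KEY (id)
) TYPE=InnoDB;

CREATE TABLE child (
  id INT,
  parent_id INT,
  INDEX par_ind (parent_id),
  -- Deleting a parent row now deletes its child rows too.
  FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE
) TYPE=InnoDB;
```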
(undefined)
[Other attributes]
isCMP1_x: false (not CMP1.x)
isJMS: false (not JMS)
Has anyone else had problems with WAS 5.1 access MySql
datasources?
Tom
[EMAIL PROTECTED]
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
-sharing-scope:true (SHAREABLE)
res-resolution-control: 999 (undefined)
[Other attributes]
isCMP1_x: false (not CMP1.x)
isJMS: false (not JMS)
Has anyone else had problems with WAS 5.1 access MySql
datasources?
Tom,
Make sure you're using
had problems with WAS 5.1 access
MySql
datasources?
Tom,
Make sure you're using
'com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource'
as the
datasource implementation class that you plug into
your WebSphere
config...For some reason, no other classes (Driver
or plain DataSource
Hi,
I have been backing up via the dubious method of copying the database data folder onto
another machine where it is properly backed up onto DLT.
(yes, I know I should have used mysqldump!)
Recovering some tables today I copied the files back into their position (including the ibdata1 file and
then went to CPAN to get DBI, DBD::mysql, and CGI, but ran into many
problems, mostly with make test and make install.
Any advice would be truly appreciated. Should I not use CPAN?
Here's briefly the log (Note : I cancelled a previous cpan install
Bundle::CPAN with Ctrl-C, which probably caused
On Jan 31, 2004, at 1:09 AM, Adam Goldstein wrote:
On Jan 30, 2004, at 10:25 AM, Bruce Dembecki wrote:
On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful
stuff:
So.. My tips for you:
1) Consider a switch to InnoDB, the performance hit was dramatic,
and it's
about SO much more
I am getting the following log entry when ever I try to connect to mysql
from the network.
mysqld[2857]: refused connect from 192.168.1.28
This is on an SuSE 9.0 linux box and I haven't enabled any firewall yet
so I don't know why this would be happening. Any ideas?
Chris W
I have built a web site and I am testing it locally on my PC. Testing
through Internet Explorer is awfully slow and most of the time I am
getting error 'ASP 0113' script timed out. The table I am calling
records from is quite text heavy (a few hundred to 1,000+ words per
field in some
seconds is a lifetime). You
may not code the queries yourself, but you can identify the queries that
are causing problems and from there you can advise the client on changes
to the database structure (indexes etc) or at least tell him exactly what the
problem queries are.
The slow log has helped
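Enabling the slow query log in a my.cnf of that era looks roughly like this; the log path is an example:

```ini
[mysqld]
# Queries taking longer than long_query_time seconds are logged here.
log-slow-queries = /var/lib/mysql/slow-queries.log
long_query_time  = 10
```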
I can't address the application side, but MySQL shouldn't be
the cause of problems with this sort of hardware and server load. On the
rest of the stats, my comments are: the number of queries isn't that high;
this server should be more than enough, properly tuned.
Number of connections seems high
I had posted a message earlier this week about my 'Left Join' taking too
long to run. This seems to be happening on all of my queries that have
a 'Left Join'. Does anyone have any suggestions on why this would
happen?
Here is one query which took 45.72 sec to run:
SELECT
messages where it
would take too long to decipher the question--I'd assume that other
people do the same.
HTH
Bill
Date: Thu, 29 Jan 2004 08:03:25 -0800
From: Jacque Scott [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: More Left Join problems
I had posted a message earlier this week about my
to decipher the question--I'd assume that other
people do the same.
HTH
Bill
Date: Thu, 29 Jan 2004 08:03:25 -0800
From: Jacque Scott [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: More Left Join problems
I had posted a message earlier this week about my 'Left Join' taking
too
long
to use.
- Original Message -
From: Jacque Scott
To: [EMAIL PROTECTED] ; [EMAIL PROTECTED]
Sent: Thursday, January 29, 2004 2:18 PM
Subject: Re: More Left Join problems
Thanks for your time. I didn't think of formatting the query. Here is
the query in a more readable format. I
problems
Thanks for your time. I didn't think of formatting the query. Here is
the query in a more readable format. I have also taken out most of the
columns in the SELECT. The query still takes 44 seconds.
SELECT Products.NSIPartNumber, If(Bom.ProductID Is Not Null,'x','') AS
BOM, Products.Obsolete
that it costs something to maintain the index, too. (Time to look for a book on SQL
that talks about such things...)
- Original Message -
From: Jacque Scott
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, January 29, 2004 6:28 PM
Subject: Re: More Left Join problems
as a slow query, the default 10 seconds is a lifetime). You may
not code the queries yourself, but you can identify the queries that are
causing problems and from there you can advise the client on changes to the
database structure (indexes etc) or at least tell him exactly what the
problem queries
Problems on a G5/OSX/MySql4.0.17
I don't think there would be any benefit to using
InnoDB, at least not
from a transaction point of view
For the longest time I was reading the books and
listening to the experts
and all I was hearing is InnoDB is great because it
handles transactions.
Having
On 1/28/04 10:29 AM, stairwaymail-mysql at yahoo dot com wrote:
So should we always use InnoDB over BerkeleyBD? I was
under the impression Berkeley was faster and better at
handling transactions.
Dan
Eermm... That's outside my scope of expertise, my experiences have been
exclusively with
Raid 5 is just as common as any other raid in software, and on my other
boxes it does not present any problem at all... I have seen excellent
tests with raid5 in software, and many contend that software raid 5 on
a high-powered system is faster than hardware raid 5 using the same
disks-- I
to qualify as a slow query, the default 10 seconds is a lifetime). You
may
not code the queries yourself, but you can identify the queries that
are
causing problems and from there you can advise the client on changes
to the
database structure (indexes etc) or at least tell him exactly what the
problem
I don't think there would be any benefit to using InnoDB, at least not
from a transaction support view.
After your nightly optimize/repair are you also doing a flush? That may
help.
I haven't seen any direct comparisons between HFS+ and file systems
supported by Linux. I would believe that
I have managed to get what looks like 2G for the process, but, it does
not want to do a key_buffer of that size
I gave it a Key_buffer of 768M and a query cache of 1024M, and it seems
happier.. though, not noticeably faster.
[mysqld]
key_buffer = 768M
max_allowed_packet = 8M
I have had linux on soft-raid5 (6x18G, 8x9G, 4x18G) systems, and the
load was even higher... The explanation for this could be that at high
IO rates the data is not 100% synced across the spindles, and therefore
smaller files (ie files smaller than the chunk size on each physical
disk) must
I cannot seem to allocate any large amounts of memory to Mysql on our
system...
Can anyone suggest any settings/changes/etc to get this running to the
best of its ability?
Dual 2Ghz G5, 4G ram, OSX 10.3.2, 73G-10Krpm Sata Raptor drives
Using both the 'Complete Mysql4.0.15 and Standard binary
You may be hitting an OSX limit. While you can install more than 2GB on
a system, I don't think any one process is allowed to allocate more
than 2GB of RAM to itself. It's not a 64-bit OS yet. You should be able
to search the Apple website for this limit.
On Jan 26, 2004, at 6:10 AM, Adam
Others on this list have claimed to be able to set over 3G, and my
failure is with even less than 2G (though I am unsure if there is a
combination of other memory settings working together to create a 2GB
situation combined)
Even at 1.6G, which seems to work (though, -not- why we got 4G of
2GB was the per-process memory limit in Mac OS X 10.2 and earlier. 10.3
increased this to 4GB per-process. I've gotten MySQL running with 3GB
of RAM on the G5 previously.
This is an excerpt from a prior email to the list from back in October
when I was first testing MySQL on the G5:
Yes, I saw this port before... I am not sure why I cannot allocate more
ram on this box- It is a clean 10.3 install, with 10.3.2 update. I got
this box as I love OSX, and have always loved apple, but, this is not
working out great. Much less powerful (and less expensive) units can do
a better
Have you tried reworking your queries a bit? I try to avoid using IN
as much as possible. What does EXPLAIN say about how the long queries
are executed? If I have to match something against a lot of values, I
select the values into a HEAP table and then do a join. Especially if
YOU are going
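The HEAP-table approach mentioned above, sketched with hypothetical names (TYPE=HEAP is the 4.0-era spelling):

```sql
-- Load the candidate values into an in-memory table, then join;
-- the join can use the primary key where a long IN list may not.
CREATE TABLE tmp_ids (id INT NOT NULL, PRIMARY KEY (id)) TYPE=HEAP;
INSERT INTO tmp_ids VALUES (101), (202), (303);

SELECT t.*
FROM big_table t
INNER JOIN tmp_ids i ON i.id = t.id;
```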
The primary server (Dual Athlon) has several U160 scsi disks, 10K and
15K rpm... Approximately half the full size images are on one 73G U160,
the other half on another (about 120G of large images alone being
stored... I am trying to get him to abandon/archive old/unused images).
The
I have added these settings to my newer my.cnf, including replacing the
key_buffer=1600M with this 768M... It was a touch late today to see if
it has a big effect during the heavy load period (~3am to 4pm EST, site
has mostly European users)
I did not have any of these settings explicitly set
On the workstation, at any rate, it was enough to throw off the 'make'
process for MySQL, and it's not just me - I found quite a few references
on google to problems building sql_lex.cc. It must be some sort of
special case, but I'm afraid I just don't know what.
In any case, I've now been running
ISP, and my workstation is on ADSL behind a Netsys
router (the ADSL ISP uses PPPoE, don't know if that's relevant or not).
The server has RAID 1, and has always been 100% reliable (up since
2000). I have been using MySQL for over four years now, and have never
had any problems until recently
I have tried two ways of dumping data but it doesn't seem to be working.
One using the admin window with mysqldump dbname > dumptest.sql, but I
don't know if it has done anything because all it did was return to a
new blank line. I can't find anywhere a file named dumptest.sql
The other way was
been using MySQL for over four years now, and have never
had any problems until recently, when I tried using replication.
I wanted to mirror the database to my workstation over the DSL
connection. I got it working correctly, but quickly found that the slave
would just stop replicating if I went away
Mark van Herpen [EMAIL PROTECTED] wrote:
Sure,
SELECT ID, companyName, streetName, houseNo, postalCode, city, firstName,
lastName, debNo, houseNoExt , MATCH (companyName, streetName, city,
postalCode, lastName, firstName) AGAINST ('Mark -Nijmegen' IN BOOLEAN MODE)
AS score
FROM
Table's defined with ... content TEXT field, but when I issue the command
insert into content values ( 'test', '', 'index.html', 1.000,
'index.html','','', load_file('/home/projects/URCMS/test/index.html'),
'import', now(), 'test', 'import', 0,'');
either as script input, or from the sql
From the manual, check:
* the file must be on the server
* you must specify the full pathname to the file (ok, you did this)
* you must have the FILE privilege
* the file must be readable by all and be smaller than
max_allowed_packet
If the file doesn't exist or can't be read due to one of the
On Friday 09 January 2004 01:44 pm, Diana Soares wrote:
From the manual, check:
* the file must be on the server
* you must specify the full pathname to the file (ok, you did this)
* you must have the FILE privilege
Ahhh! Misunderstood that one. Got it, now. Thanks.
Still don't understand
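A quick way to test those conditions from the mysql client, using the path from the original message (the user and host in SHOW GRANTS are placeholders):

```sql
-- NULL means the file is missing, unreadable, too large, or the FILE
-- privilege is absent; any other value means LOAD_FILE worked.
SELECT LOAD_FILE('/home/projects/URCMS/test/index.html') IS NULL
       AS load_failed;
-- Confirm FILE appears among the grants:
SHOW GRANTS FOR 'youruser'@'yourhost';
```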
mark wrote (1/9/2004):
On Friday 09 January 2004 01:44 pm, Diana Soares wrote:
From the manual, check:
* the file must be on the server
* you must specify the full pathname to the file (ok, you did this)
* you must have the FILE privilege
Ahhh! Misunderstood that one. Got it, now. Thanks.
I'm having some strange problems with a fulltext search in boolean mode. The
'-' operator doesn't seem to work correctly. First let me show you my query:
SELECT ID, companyName, streetName, houseNo, postalCode, city, firstName,
lastName, debNo, houseNoExt , MATCH (companyName
Mark van Herpen [EMAIL PROTECTED] wrote:
I'm having some strange problems with a fulltext search in boolean mode. The
'-' operator doesn't seem to work correctly. First let me show you my query:
SELECT ID, companyName, streetName, houseNo, postalCode, city, firstName
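One documented gotcha with the '-' operator: by itself it only excludes, so it needs at least one positive term, and words shorter than ft_min_word_len (default 4) are ignored entirely. A minimal illustration; the table name here is a placeholder:

```sql
-- '+Mark' requires the word, '-Nijmegen' then excludes rows containing
-- it. The natural-language 50%-of-rows cutoff does not apply in
-- BOOLEAN MODE, but the minimum word length still does.
SELECT ID, companyName
FROM customers
WHERE MATCH (companyName, streetName, city, postalCode, lastName,
             firstName)
      AGAINST ('+Mark -Nijmegen' IN BOOLEAN MODE);
```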