I'm trying to run a full-text query on a two-letter keyword 'K7'. I have
set ft_min_word_len=2 and restarted the server, and if I view the system
variables in MySQL Workbench it shows the setting is correct.
I have then dropped and re-created the index on the descrip column. It
is an InnoDB table so I c
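One thing worth checking, assuming this is a MySQL 5.6+ InnoDB FULLTEXT index: ft_min_word_len only applies to MyISAM, while InnoDB uses innodb_ft_min_token_size, which has to be set in my.cnf and needs both a restart and an index rebuild before short words become searchable. A rough sketch (table and index names invented):

-- in my.cnf, then restart the server:
-- [mysqld]
-- innodb_ft_min_token_size = 2

SHOW VARIABLES LIKE 'innodb_ft_min_token_size';

-- rebuild the FULLTEXT index so existing rows are re-tokenised
ALTER TABLE mytable DROP INDEX ft_descrip;
ALTER TABLE mytable ADD FULLTEXT INDEX ft_descrip (descrip);

SELECT * FROM mytable
WHERE MATCH(descrip) AGAINST ('K7' IN BOOLEAN MODE);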
A few questions:
Which is more of a problem: network outages, network capacity, or query latency?
When you say "near real-time", do you need a transactionally consistent view across all
49 servers, or can some lag be tolerated?
Can any one of the 49 local servers potentially update/delete the same rows or
es are both reading and writing data and how many?
How quickly do you expect new entries to be added?
Will entries ever be deleted?
Do you need transactions?
What volume of working set data are we talking about?
If "The process needs to continuously determine" means lots of writers
and
On Fri, 2011-05-06 at 11:12 +0100, Dhaval Jaiswal wrote:
> Caused by: java.net.SocketException: Socket closed
I'd suggest you look at the server-side timeout and maximum connection
settings in
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html - I'd
suspect wait_timeout is the setting y
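A minimal sketch of checking and raising those settings (the 28800 values are only examples; changes apply to new connections):

-- inspect the current server-side timeouts (seconds)
SHOW GLOBAL VARIABLES LIKE '%timeout%';

-- raise the idle-connection timeouts if clients are being disconnected while idle
SET GLOBAL wait_timeout = 28800;
SET GLOBAL interactive_timeout = 28800;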
On Fri, 2011-05-06 at 09:00 +0100, Rocio Gomez Escribano wrote:
> Tables "client" and "user" are quite similar, but they don't have any
> intersection; I mean, if somebody is a client, he or she can't be a user. So,
> I have his or her driving license and I need to know which kind of person he or she is.
>
> I
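One way to answer "which kind of person holds this license" is a UNION over the two tables; the column name driving_license and the sample value are guesses based on the description:

SELECT 'client' AS person_type, driving_license
FROM client
WHERE driving_license = '12345678A'
UNION ALL
SELECT 'user' AS person_type, driving_license
FROM user
WHERE driving_license = '12345678A';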
On Mon, 2010-09-27 at 11:25 +0100, Willy Mularto wrote:
> Hi,
> I work on MySQL 5 with PHP 5 and use Apache 2 as the webserver. I have a
> simple query that searches matched row id and update the field via HTTP GET
> query. On a low load it succeed update the row. But on high traffic sometimes
On Wed, 2010-06-16 at 08:59 +0100, Nigel Wood wrote:
> I'd use:
> drop temporary table if exists AttSearchMatches;
> select pid as targetPid, count(*) as criteraMatched from B where
> userId=35 and ( (b.aid=1 and b.value >50) OR (b.aid=3 and b.value
> =4) ) group by pid ha
> I have 2 tables. Table A containing 2 fields. A user ID and a picture ID =>
> A(uid,pid) and another table B, containing 3 fields. The picture ID, an
> attribute ID and a value for that attribute => B(pid,aid,value).
>
> Table B contains several rows for a single PID with various AIDs and value
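Filling in the truncated reply above, a sketch of the temporary-table approach against the A(uid,pid) / B(pid,aid,value) schema just described; the table name, the HAVING count and the join back to A are assumptions:

DROP TEMPORARY TABLE IF EXISTS AttSearchMatches;
CREATE TEMPORARY TABLE AttSearchMatches
SELECT b.pid AS targetPid, COUNT(*) AS criteriaMatched
FROM A a
JOIN B b ON b.pid = a.pid
WHERE a.uid = 35
  AND ( (b.aid = 1 AND b.value > 50) OR (b.aid = 3 AND b.value = 4) )
GROUP BY b.pid
HAVING COUNT(*) = 2;   -- keep only pictures matching both criteria

SELECT targetPid FROM AttSearchMatches;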
> > Consider the following concept,
> >
> > ~/index.php
> >
> > #1. Fetch data from an external webpage using PHP Curl;
> > #2. Preg_match/Prepare Data to INSERT from local MySQL; - this may take a
> > few secs
> > #3. While Loop { INSERT data (from #2) into local MySQL } - this may take
> > only m
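If step #3 is the slow part, batching the INSERTs usually helps: one multi-row INSERT (wrapped in a transaction if the table is InnoDB) instead of one statement per loop iteration. A sketch with an invented table:

START TRANSACTION;
INSERT INTO scraped_data (url, title, fetched_at) VALUES
  ('http://example.com/a', 'Page A', NOW()),
  ('http://example.com/b', 'Page B', NOW()),
  ('http://example.com/c', 'Page C', NOW());
COMMIT;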
nigel wood wrote:
Here's a rough table structure. The index on the events tables would be
TargetId to start with, but probably TargetId+EventDate, or perhaps
EventId+EventDate, as you found more uses/added paging.
Well that didn't format very well :-( The table structures are:
User/Actor
==
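Since the pasted structures were lost in formatting, here is a minimal illustrative sketch of the events table being described; every column type, and any column beyond TargetId/EventDate/EventId, is an assumption:

CREATE TABLE events (
  eventId   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  targetId  INT UNSIGNED NOT NULL,        -- the user/actor the event applies to
  eventDate DATETIME     NOT NULL,
  eventType VARCHAR(32)  NOT NULL,
  PRIMARY KEY (eventId),
  KEY target_date (targetId, eventDate)   -- supports per-target history and paging
);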
Marcus Bointon wrote:
"For the most part this is write-only and is only ever read very rarely,
but when I do, it will be to retrieve the details of a single user, and
all I need is the whole history, not individual events."
For your stated requirements the filesystem is probably most efficien
Marco Bartz wrote:
I accidentally sent it before finishing...
I am looking for a way to do the following with a single query:
SELECT `ID`, `Name`, `Interface`,
(SELECT count(*) FROM CONCAT('listings_', `ID`) WHERE `Status`='Active') as
`activeListings`
FROM `sites`
I am querying the
Peter Brawley wrote:
> Is there a way to total counts done in subqueries?
Select expression aliases can't be referenced at the same level. You
have to create another outer level ...
alternatively use variables:
mysql> select @first := 1 as value1, @second := 2 as value2,
@fir...@second as
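Peter's two routes, sketched against a single fixed table called listings; the per-site listings_<ID> tables in the original question would additionally need the SQL built dynamically (e.g. with a prepared statement):

-- outer level: give the subquery counts a derived-table alias, then add them
SELECT t.activeListings + t.pendingListings AS total
FROM (
  SELECT
    (SELECT COUNT(*) FROM listings WHERE Status = 'Active')  AS activeListings,
    (SELECT COUNT(*) FROM listings WHERE Status = 'Pending') AS pendingListings
) AS t;

-- or with user variables, relying on left-to-right evaluation of the select list
SELECT @a := (SELECT COUNT(*) FROM listings WHERE Status = 'Active')  AS activeListings,
       @p := (SELECT COUNT(*) FROM listings WHERE Status = 'Pending') AS pendingListings,
       @a + @p AS total;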
Hi all,
Apologies if this isn't the correct list but I couldn't see a more
suitable one.
I have 4 tables. t1 and t3 have a many-to-many relationship and use t2
as the link table. t3 has many t4.
What I want to do is insert a new row into t3 for each row in t1. I
then want to add the correspo
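A sketch of the general pattern, with invented column names: copy one t3 row per t1 row, keeping a back-reference to the originating t1 row so the t2 link rows can be built afterwards:

INSERT INTO t3 (name, created_from_t1)
SELECT t1.name, t1.id
FROM t1;

INSERT INTO t2 (t1_id, t3_id)
SELECT t3.created_from_t1, t3.id
FROM t3
WHERE t3.created_from_t1 IS NOT NULL;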
s? if not the source is probably
table contention.
Get a couple of 'show full processlist', 'show innodb status' query
outputs during the lockups and run vmstat 1 -S M in another terminal.
With the outputs from both you've something to work with.
Nigel Wood
result of replication failure) we directed all the traffic
normally sent to the reporting server back to the master server, adding
1/3 to its load. Several areas of the websites got FASTER afterwards and
I'm currently at a loss to explain why.
Nigel Wood
Marvin Wright wrote:
I have 3 tables where I keep cache records, the structures are something
like
TableA is a 1 to many on TableB which is a 1 to many on TableC
To give you an idea of size, TableA has 8,686,769 rows, TableB has
56,322,236 rows and TableC has 1,089,635,551 rows.
My expir
I suspect you posted to the list before looking up the functions in
the online documentation, so I'll simply confirm they exist and leave
you to Do Your Own Research by Reading The Fine Manual.
Nigel Wood
Michael DePhillips wrote:
Hi Dan,
Thanks for the prompt reply,
As I described it, yes, you are correct; however, the id may not always
be one (1) value away. So the number one needs, somehow, to be replaced
with a way to get the "next largest value" and the "previous less than"
value.
Sorr
Michael DePhillips wrote:
Hi,
Does anyone have a clever way of returning a requested value, together
with the one value less than it and the one value greater than it,
in one query?
For example T1 contains
ID
1234
1235
1236
1238
Assuming the ids are consecutive.
You want surrounding
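One way to get the requested id together with its nearest neighbours, whether or not the ids are consecutive (a sketch using the T1/ID example above):

(SELECT ID FROM T1 WHERE ID < 1236 ORDER BY ID DESC LIMIT 1)
UNION ALL
(SELECT ID FROM T1 WHERE ID = 1236)
UNION ALL
(SELECT ID FROM T1 WHERE ID > 1236 ORDER BY ID ASC LIMIT 1);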
Hello everyone!
I've got a few questions regarding optimizing self-joins.
So I've got these three tables:
mysql> describe FieldName;
+-------+------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+------+------+-----+---------+-------+
Dan Trainor wrote:
Dan Trainor wrote:
Hi -
I would like to be able to replicate all queries from a live MySQL
server, to a testing server at the office.
The reason for doing this is to test load under [semi]real-world
conditions with the new server.
Hi -
So I was thinking about this m
David T. Ashley wrote:
Nigel wrote:
mod_php will persist the MySQL connection, holding open any lock or
synchronisation token obtained through any of the three methods:
begin/commit, lock/unlock tables or get_lock/release_lock. PHP does
ensure that even in the event of timeouts or fatal e
David T. Ashley wrote:
Nigel wrote:
If you can't or won't do this properly by using a transactional table
and begin/commit, at least look at using get_lock() based guard
conditions, which only lock a string, leaving the database accessible.
Whatever you do, if your client is PHP, install a shutdo
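A sketch of the get_lock() guard being described; the lock name and timeout are arbitrary:

SELECT GET_LOCK('update_user_42', 10);   -- 1 = acquired, 0 = timed out
-- ... run the multi-statement update here ...
SELECT RELEASE_LOCK('update_user_42');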
David T. Ashley wrote:
Hi,
I'm doing a PHP application, and there are just a few instances where I need
to do atomic operations on more than one table at a time and I can't express
what I want to do as a single SQL statement.
What I'm trying to guard against, naturally, is race conditions when
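For comparison, the transactional route mentioned in the replies might look like this (requires InnoDB tables; table names invented):

START TRANSACTION;
UPDATE accounts  SET balance = balance - 10 WHERE id = 1;
UPDATE audit_log SET last_change = NOW()    WHERE account_id = 1;
COMMIT;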
David Godsey wrote:
I know, I know, it sounds like something that should be done in the
presentation layer; however, if possible, I would like to provide common
data presentation to multiple presentation layers (written in different
languages).
So is there any way to return an array in MySQL?
Y
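MySQL has no array type; one common workaround (not necessarily what the truncated reply suggested) is to pack values into a delimited string with GROUP_CONCAT (MySQL 4.1+) and split it in each presentation layer. Table and column names here are invented:

SELECT user_id,
       GROUP_CONCAT(tag ORDER BY tag SEPARATOR ',') AS tags
FROM user_tags
GROUP BY user_id;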
[EMAIL PROTECTED] wrote:
"Paul Halliday" <[EMAIL PROTECTED]> wrote on 14/03/2006 12:09:10:
As an example:
There was a table called event.
This table is now broken up like this:
event __.
So for every sensor, and every day, there is now a new table. So if I
have 20 sensors, every day I
David T. Ashley wrote:
Hi,
I have several tables linked in various ways so that an inner join is
possible. However, at the same time and in the same SQL query, I'd also
like to query by some field values in one of the tables.
Two quick questions:
a) Will MySQL allow joins that involve more th
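For what it's worth, joins over more than two tables combined with WHERE filters on any of them are allowed; a generic sketch with invented tables:

SELECT o.id, c.name, p.title
FROM orders o
INNER JOIN customers c ON c.id = o.customer_id
INNER JOIN products  p ON p.id = o.product_id
WHERE c.country = 'US'
  AND o.created_at >= '2006-01-01';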
Gyurasits Zoltán wrote:
Hi ALL!
Please help
DATA:
header_id  item_id  quant
1          1        10
1          2        20
2          1        100
2          2        200
3          1        20
3          2        15
"header" is the moving type, and "items" is the items table.
If header.type_ is "1" then incoming move, if "2" o
Scott Baker wrote:
Is there a way to tell mysqld to stop running a query if it hasn't
completed in over 60 seconds? Or how about stop running it if the result
is greater than 100,000 rows or something? I have a bonehead user who
keeps querying two tables and NOT joining them, and then dragging the
D
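MySQL of that era has no per-query time limit, but two things help with this situation (the limit value and the process id below are only examples):

-- refuse statements the optimizer estimates will examine more than this many rows
SET GLOBAL max_join_size = 100000;

-- or kill the runaway statement by hand
SHOW FULL PROCESSLIST;
KILL 12345;   -- 12345 = the Id column from the processlist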
Mark Phillips wrote:
2. Generally, what is the most "efficient" way to do this? Is it better to
issue more queries that gather the "calculated data", or better to issue one
query for the raw data and then do the calculations in Java? I am sure there
are many factors that affect the answer to
Mark Phillips wrote:
Flights
+-----------+----------+----------+
| flight_id | data1_id | data2_id |
+-----------+----------+----------+
|         1 |        1 |        1 |
|         2 |        1 |        3 |
|         3 |        1 |        1 |
|         4 |        2 |        2 |
|         5 |
Do you have any log messages associated with it?
(Not sure for Windows / OS X; for Linux, look in /var/lib/mysql/.err)
Hi.
For clarity, I'm running mysql 4.0.20
And I did start the mysql daemon.
Luke Vanderfluit wrote:
Hi.
I have a database that is used with wordpress blogging software.
Yesterda
Dotan Cohen wrote:
Hi all, I have a field in a MySQL 4.0.18 database that contains a
Unix timestamp. I have been googling for a solution that would
return to me all the entries where the timestamp falls on, say, a
Wednesday, or between 2pm and 3pm. I am led to believe that it is
possible, but I
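It is possible; a sketch assuming the column is an integer Unix timestamp named ts in a table named entries:

-- rows whose timestamp falls on a Wednesday (DAYOFWEEK: 1 = Sunday, 4 = Wednesday)
SELECT * FROM entries WHERE DAYOFWEEK(FROM_UNIXTIME(ts)) = 4;

-- rows whose timestamp falls between 2pm and 3pm
SELECT * FROM entries WHERE HOUR(FROM_UNIXTIME(ts)) = 14;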
Hi
I have a number of clients connecting to a DB in order to take jobs off a
queue, mark them active, then run them. In pseudo code, each client executes
the following sequence of queries:
a-- select test_id from tests where status=1 and priority < 11 order by
priority
b-- update tests set st
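The usual risk with select-then-update is two clients claiming the same job. One sketch of an atomic claim, with any column beyond status/priority/test_id invented:

UPDATE tests
   SET status = 2, owner = CONNECTION_ID()
 WHERE status = 1 AND priority < 11
 ORDER BY priority
 LIMIT 1;

SELECT test_id FROM tests WHERE status = 2 AND owner = CONNECTION_ID();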
Does anyone know if it's possible to reconfigure the master to
observe the old replication setup? Or do I need to look at upgrading
the slave to version 4 (which leads to the next question: does anyone
fancy my chances of getting MySQL 4 built and running under Red Hat 7.1
:) ??)
Cheers
Tim
--
Tim Wood
P
On Thu, 03 Oct 2002, Andrew Braithwaite wrote:
Please can you post an explain of this query? As a first guess try:
alter table publicinfo add index location (x,y,class_code);
Nigel
un it
on the 'most recent' table after each daily delete? The number of rows in
the first table will stay fairly constant, so if I never run it, will the
table/index space tracking the deleted rows' locations eventually be reused,
or will the table/index size grow c
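If the truncated question is about OPTIMIZE TABLE (which the context suggests), running it after the daily delete rebuilds the table and index and reclaims the space held by deleted rows; the table name is a stand-in:

OPTIMIZE TABLE most_recent;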
Spam is such an ugly word. We believe that the attached piece, while not
directly concerned with MySQL, may be of general interest to the
list subscribers.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 13, 2002 12:08 PM
To: Andy Wood
On Sat, 27 Apr 2002, Sam Minnee wrote:
> I've been asked to put together a very large (well, it's large to me)
> database, and while mySQL is great for my current uses, I haven't had
> experience with stuff of this scale.
>
> The database will have about 88 tables, with up to 100 fields per table
On Tue, 16 Apr 2002, David Ayliffe wrote:
> Are MySQL really going to give you details of their past security
> 'issues'?
>
> Think about it. Try going underground and looking on some exploit
> sites.
>
> DA
>
>
> >
> Hi,
> I'm working on security breaches in MySQL. Can someone guide me i
On Wed, 20 Mar 2002, Steve Gardner wrote:
> Hi All
>
> Could someone explain to me why an index in a select I am doing is only
> sometimes used.
> mysql> explain select * from mailstat where domain_id in(2);
On Tue, 19 Mar 2002, Horacio Lam wrote:
> The following is a query I tried to do, to view reports between two dates.
> The problem is that the reports made on the second date are not being
> displayed. If anyone can help me, please reply.
> thanks
>
>
> select ticket.t_id, t_s
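A common cause of the last day's rows going missing is that the upper bound of a date range on a DATETIME column stops at midnight, so the whole second day is excluded. A sketch of the usual fix (column names invented):

SELECT t_id
FROM ticket
WHERE created_at >= '2002-03-01'
  AND created_at <  DATE_ADD('2002-03-19', INTERVAL 1 DAY);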
, othervalue
1,4
3,4
I basically want to know if there's a query that returns 2 and 4, i.e. the
key_ids that are in A but missing from B?
Any SQL gurus out there?
Cheers
Tim
--
Tim Wood
Predictive Technologies
ph +61 3 8344 0395 (BH) +61 413 845 317
Everyone talks about apathy, but no one does
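One way to express "in A but missing from B", assuming both tables carry a key_id column as described:

SELECT A.key_id
FROM A
LEFT JOIN B ON B.key_id = A.key_id
WHERE B.key_id IS NULL;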
said they have no problem importing
those same tables to their databases, so I'm wondering if it has to do with
something that the hosting service has configured improperly or what.
I'd appreciate any help to get this resolved.
Thank
e MyFile.sql
Do I have to put the .sql file in some particular directory or can I
specify a path to it? How do I know what path to specify?
Thanks in advance...
Michael Wood
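For reference, the file can live anywhere the client user can read; the database name and path below are hypothetical:

-- from the shell:
--   mysql -u myuser -p mydatabase < /path/to/MyFile.sql
-- or from inside the mysql client:
SOURCE /path/to/MyFile.sql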
Hi Peter
We use the MySQL replication stuff here. It's very reliable, and the
synchronization between master & slave is pretty much instant.
There does seem to be one problem: when the master rotates its transaction
logs, the slave does not pick it up and you need to manually issue "reset
slave"
> In the last episode (Dec 13), Tim Wood said:
> > Does anyone out there know of any
> > - future plans by the mysql development crew to increase table size
> > limits by eg using their own custom filesystem type?
>
> You mean the MyISAM RAID table extension, or the
he 4G limit imposed
by the linux kernel
- any alternative filesystems (JFS?) that might permit greater table
sizes under mysql
- any other tested and functioning workarounds to this issue?
Any suggestions will be appreciated
Cheers
Tim
--
Tim Wood
Predictive Technologies
ph +61 3 8344 0395 (BH) +61
The problem was in fact an illegal column name in one of the tables that a
Content Management program (phpWebSite) tried to create. The program seems
to be well done, but somehow they missed a beat on one of the modules that
it creates (or doesn't, because of the illegal name). I tracked it down
ror in your SQL syntax near 'data
LONGTEXT NULL,
PRIMARY KEY (id)
)' at line 4
Any help solving this would be appreciated.
RW Wood
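If the illegal name is a reserved word or contains odd characters, backtick-quoting the identifier usually lets the CREATE TABLE through; a generic sketch, not the actual phpWebSite schema:

CREATE TABLE example (
  id     INT NOT NULL AUTO_INCREMENT,
  `data` LONGTEXT NULL,
  PRIMARY KEY (id)
);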
ID instead of the name to the set_user
function in the file sql/mysqld.cc.
>Submitter-Id:
>Originator:Kelvin Wood
>Organization:
none
>MySQL support: none
>Synopsis: getpwent for --user fails after chroot()
>Severity: non-critical
>Priority: low
>
to comment on their average
time-to-resolution from a non-GPL db company?
--
| Nigel Wood
| PlusNet Technologies Ltd.
+ -- Internet Access Solutions @ http://www.plus.net -
I have a table in a MySQL database on a Solaris system which contains log
entries and have stupidly overflowed Solaris' 4GB file
limit. The table is unusable. isamchk reports:
error: 'log.ISD' is not a ISAM-table
I have tried making a truncated copy of the file and isamchk'ing the
shorted
On Wed, 04 Apr 2001, Gunnar von Boehn wrote:
> Hello
>
>
> My provider, 1&1-Puretec (www.puretec.de),
> hosting more than 1,000,000 domains,
> runs about 14 database servers with MySQL 3.22.32-log
> on dual Pentium-III 500MHz Linux machines.
>
> In the last 6 months the average uptime of the mysql-serv
On Mon, 02 Apr 2001, Richard Ellerbrock wrote:
> From the manual:
>
> If you are using FOR UPDATE on a table handler with page/row locks, the examined
>rows will be write locked.
>
> I agree that this does not tell me much. When are the rows unlocked?
>
educated guess
select for update
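A sketch of the usual answer for InnoDB: rows read with FOR UPDATE stay write-locked until the surrounding transaction ends (table name invented):

BEGIN;
SELECT * FROM orders WHERE id = 7 FOR UPDATE;    -- row is now write-locked
UPDATE orders SET status = 'shipped' WHERE id = 7;
COMMIT;                                          -- lock released here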
On Wed, 28 Mar 2001, Aigars Grins wrote:
> Hi,
>
> The short version:
>
> Does anyone know an automated way of keeping two MySQL db's (located on
> different machines) in sync? A small window of non-sync could be acceptable.
>
> The long version:
The important question is how much availabili
Hi all,
Here are two tables I'm working with. I apologize if you
are not using a monospaced font and they are messed up.
This is used by a gradebook program. When a professor adds an assignment to
a class he teaches, it inserts the information about the assignment into the
assignments table,
> I want to ask you:
> 1- How many rows can a MySQL table hold?
> 2- I have to design a database for all the universities of my country. Is it
> better to have one database for each university or one database for all
> universities? Note that a query on a table "student" would be more qui
Hi Warren,
What I personally would do is simply include some sort of 'ID' field. In
other words, each question would have a unique ID. Question 1's ID would be
1, Question 2's ID would be 2, etc. Or however you wanted to number them.
You could even set this up as an auto_increment and have t
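A minimal sketch of that suggestion, with invented names:

CREATE TABLE questions (
  id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
  question TEXT NOT NULL,
  PRIMARY KEY (id)
);

INSERT INTO questions (question) VALUES ('What is your name?');  -- id is assigned automatically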
Hi all,
Does anyone know if mod_auth_mysql is still under active development, and if
so, where can I find current information on it? I have searched the list
archives here and indeed turned up 254 instances of people having problems
with mod_auth_mysql. From the sounds of it, it doesn't look