A few questions:
Which is more of a problem: network outages, network capacity, or query latency?
When you say near real-time, do you need a transactionally consistent view across all
49 servers, or can some lag be tolerated?
Can any one of the 49 local servers potentially update/delete the same rows?
With lots of writers
and a single analysing process I'd definitely use stored procs and have
the procs write to a job-queue table for the analysis process.
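The job-queue idea can be as simple as one extra table; this is only a sketch, with invented names:

  create table analysis_queue (
    id         bigint unsigned not null auto_increment primary key,
    server_id  int not null,
    row_pk     bigint unsigned not null,
    queued_at  timestamp not null default current_timestamp
  );

  -- at the end of each writer's stored proc:
  insert into analysis_queue (server_id, row_pk) values (7, last_insert_id());

The single analysis process then polls the queue, processes the referenced rows, and deletes the queue entries it has handled.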
Nigel
--
Nigel Wood
Plusnet BSS Architect
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe
On Fri, 2011-05-06 at 09:00 +0100, Rocio Gomez Escribano wrote:
Tables client and user are quite similar, but they don't have any
intersection; I mean, if somebody is a client, he or she can't be a user. So
I have his or her driving license and I need to know what kind of person this is.
I'm trying
On Fri, 2011-05-06 at 11:12 +0100, Dhaval Jaiswal wrote:
Caused by: java.net.SocketException: Socket closed
I'd suggest you look at server side timeout and maximum connection
settings in
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html I'd
suspect wait_timeout is the setting
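For example (the 28800 is illustrative; pick a value longer than your client's idle periods):

  show variables like '%timeout%';
  set global wait_timeout = 28800;   -- seconds
  show variables like 'max_connections';

Note that SET GLOBAL only affects connections opened after the change; existing sessions keep their old value.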
On Mon, 2010-09-27 at 11:25 +0100, Willy Mularto wrote:
Hi,
I work on MySQL 5 with PHP 5 and use Apache 2 as the webserver. I have a
simple query that finds a matching row id and updates the field via an HTTP GET
query. Under low load it updates the row successfully. But under high traffic sometimes
it
I have 2 tables. Table A containing 2 fields. A user ID and a picture ID =
A(uid,pid) and another table B, containing 3 fields. The picture ID, an
attribute ID and a value for that attribute = B(pid,aid,value).
Table B contains several rows for a single PID with various AIDs and values.
On Wed, 2010-06-16 at 08:59 +0100, Nigel Wood wrote:
I'd use:
drop temporary table if exists AttSearchMatches;
select pid as targetPid, count(*) as criteriaMatched
from B
where userId=35 and ( (b.aid=1 and b.value > 50) OR (b.aid=3 and b.value >= 4) )
group by pid having criteriaMatched = 2;
drop
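The reply is cut off here; presumably it went on to materialise the matches and join back for the user/picture details, something along these lines (my reconstruction, not the original):

  create temporary table AttSearchMatches
    select pid as targetPid, count(*) as criteriaMatched
    from B
    where (B.aid=1 and B.value > 50) or (B.aid=3 and B.value >= 4)
    group by pid
    having criteriaMatched = 2;

  select A.uid, A.pid
  from A join AttSearchMatches m on m.targetPid = A.pid;

  drop temporary table AttSearchMatches;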
Consider the following concept,
~/index.php
#1. Fetch data from an external webpage using PHP Curl;
#2. Preg_match/Prepare Data to INSERT from local MySQL; - this may take a
few secs
#3. While Loop { INSERT data (from #2) into local MySQL } - this may take
only mili secs.
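If step #3 ever becomes the bottleneck, batching the loop into one multi-row INSERT is usually the first win (table and column names here are made up):

  insert into pages (url, fetched_at, body) values
    ('http://example.com/a', now(), '...'),
    ('http://example.com/b', now(), '...');

One statement per few hundred rows avoids paying the per-statement parse and network round-trip cost inside the loop.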
Marcus Bointon wrote:
For the most part this is write-only and is only ever read very rarely,
but when I do, it will be to retrieve the details of a single user, and
all I need is the whole history, not individual events.
For your stated requirements the filesystem is probably most
nigel wood wrote:
Here's a rough table structure. The indexes on the events tables would be
TargetId at first, but probably TargetId+EventDate or EventId+EventDate
as you found more uses/added paging.
Well that didn't format very well :-( The table structures are:
User/Actor
===
TargetId
Marco Bartz wrote:
I accidentally sent it before finishing...
I am looking for a way to do the following with a single query:
SELECT `ID`, `Name`, `Interface`,
(SELECT count(*) FROM CONCAT('listings_', `ID`) WHERE `Status`='Active') as
`activeListings`
FROM `sites`
I am querying the
Peter Brawley wrote:
Is there a way to total counts done in subqueries?
Select expression aliases can't be referenced at the same level. You
have to create another outer level ...
alternatively use variables:
mysql> select @first := 1 as value1, @second := 2 as value2,
@fir...@second as
is probably
table contention.
Get a couple of 'show full processlist', 'show innodb status' query
outputs during the lockups and run vmstat 1 -S M in another terminal.
With the outputs from both you've something to work with.
Nigel Wood
(as a result of replication failure) we directed all the traffic
normally sent to the reporting server back to the master server, adding a
third to its load. Several areas of the websites got FASTER afterwards, and
I'm currently at a loss to explain why.
Nigel Wood
Marvin Wright wrote:
I have 3 tables where I keep cache records, the structures are something
like
TableA is a 1 to many on TableB which is a 1 to many on TableC
To give you an idea of size, TableA has 8,686,769 rows, TableB has
56,322,236 rows and TableC has 1,089,635,551 rows.
My
to the list before attempting to use the functions in
the online documentation, so I'll simply confirm they exist and leave
you to Do Your Own Research by Reading The Fine Manual.
Nigel Wood
Michael DePhillips wrote:
Hi,
Does anyone have a clever way of returning a requested value, together
with the next value less than it and the next value greater than it,
in one query?
For example T1 contains
ID
1234
1235
1236
1238
Assuming the id's are consecutive.
You want
Michael DePhillips wrote:
Hi Dan,
Thanks for the prompt reply,
As I described it, yes, you are correct; however, the id may not always
be one (1) value away. So the number one needs, somehow, to be replaced
with a way to get the next largest value and the previous smaller
value.
Sorry
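On MySQL 4.1 or later, one way that copes with gaps is a union of three lookups (1236 stands in for the requested id):

  select * from T1 where ID = 1236
  union
  select * from T1 where ID = (select max(ID) from T1 where ID < 1236)
  union
  select * from T1 where ID = (select min(ID) from T1 where ID > 1236);

Against the sample data this returns 1235, 1236 and 1238 even though 1237 is missing.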
Dan Trainor wrote:
Dan Trainor wrote:
Hi -
I would like to be able to replicate all queries from a live MySQL
server, to a testing server at the office.
The reason for doing this is to test load under [semi]real-world
conditions with the new server.
Hi -
So I was thinking about this
David T. Ashley wrote:
Nigel wrote:
mod_php will persist the MySQL connection, holding open any lock or
synchronisation token obtained through any of the three methods:
begin/commit, lock/unlock tables or get_lock/release_lock. PHP does
ensure that even in the event of timeouts or fatal
David T. Ashley wrote:
Hi,
I'm doing a PHP application, and there are just a few instances where I need
to do atomic operations on more than one table at a time and I can't express
what I want to do as a single SQL statement.
What I'm trying to guard against, naturally, is race conditions
David T. Ashley wrote:
Nigel wrote:
If you can't or won't do this properly by using a transactional table
and begin/commit, at least look at using get_lock() based guard
conditions, which only lock a string, leaving the database accessible.
Whatever you do, if your client is PHP install a
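A get_lock() guard looks like this (the lock name is an arbitrary string; 10 is a wait timeout in seconds):

  select get_lock('myapp.user.1234', 10);   -- returns 1 on success, 0 on timeout
  -- ... run the multi-statement critical section only if it returned 1 ...
  select release_lock('myapp.user.1234');

The lock is also released automatically if the connection dies, which is what makes it safer than an application-level flag row.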
David Godsey wrote:
I know, I know, sounds like something that should be done in the
presentation layer; however, if possible, I would like to provide common
data presentation to multiple presentation layers (written in different
languages).
So is there any way to return an array in MySQL?
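Not a real array, but group_concat() (MySQL 4.1+) will flatten a per-group result into one delimited string that any client language can split; a sketch with an invented table:

  select user_id,
         group_concat(score order by score separator ',') as scores
  from user_scores
  group by user_id;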
[EMAIL PROTECTED] wrote:
Paul Halliday [EMAIL PROTECTED] wrote on 14/03/2006 12:09:10:
As an example:
There was a table called event.
This table is now broken up like this:
event_sensor_date.
So for every sensor, and every day, there is now a new table. So if I
have 20 sensors, every
David T. Ashley wrote:
Hi,
I have several tables linked in various ways so that an inner join is
possible. However, at the same time and in the same SQL query, I'd also
like to filter by some field values in one of the tables.
Two quick questions:
a) Will MySQL allow joins that involve more
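Yes on both counts: join conditions and ordinary column filters can live in the same statement. A generic sketch (table and column names invented):

  select o.id, c.name
  from orders o
  inner join customers c on c.id = o.customer_id
  where c.country = 'UK' and o.total > 100;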
Gyurasits Zoltán wrote:
Hi ALL!
Please help
DATA:
header_id  item_id  quant
1          1        10
1          2        20
2          1        100
2          2        200
3          1        20
3          2        15
header is the moving type, and items is the items table.
If header.type_ is 1 it's an incoming move; if 2, outgoing
Scott Baker wrote:
Is there a way to tell mysqld to stop running a query if it hasn't
completed in over 60 seconds? Or how about stopping if the result
is greater than 100,000 rows or something? I have a bonehead user who
keeps querying two tables and NOT joining them, and then dragging the
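MySQL of this vintage has no per-query timeout, but the missing-join (cartesian product) case can be caught with max_join_size; for example:

  set session max_join_size = 100000;
  set session sql_big_selects = 0;  -- makes max_join_size abort oversized selects

After this, a SELECT expected to examine more than 100,000 row combinations fails with an error instead of running.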
Mark Phillips wrote:
Flights
+-----------+----------+----------+
| flight_id | data1_id | data2_id |
+-----------+----------+----------+
|         1 |        1 |        1 |
|         2 |        1 |        3 |
|         3 |        1 |        1 |
|         4 |        2 |        2 |
|         5 |
Mark Phillips wrote:
2. Generally, what is the most efficient way to do this? Is it better to
issue more queries that gather the calculated data, or better to issue one
query for the raw data and then do the calculations in Java? I am sure there
are many factors that affect the answer to
Dotan Cohen wrote:
Hi all, I have a field in a MySQL database v4.0.18 that contains a
Unix timestamp. I have been googling for a solution that would
return to me all the entries where the timestamp falls on, say, a
Wednesday, or between 2pm and 3pm. I am led to believe that it is
possible, but
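Both tests are possible once the Unix timestamp is converted with from_unixtime(); assuming the column is called ts (dayofweek() counts Sunday as 1, so Wednesday is 4):

  select * from log where dayofweek(from_unixtime(ts)) = 4;
  select * from log where hour(from_unixtime(ts)) = 14;   -- 2:00pm-2:59pm

Both functions exist back to the 3.23/4.0 line, but note that neither predicate can use an index on ts.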
On Thu, 03 Oct 2002, Andrew Braithwaite wrote:
Please can you post an explain of this query? As a first guess try:
alter table publicinfo add index location (x,y,class_code);
Nigel
-
Before posting, please check:
in
the first table will stay fairly constant, so if I never run it, will the
table/index space tracking the deleted rows' locations eventually be reused,
or will the table/index size grow constantly until one of the files hits the
operating system limit?
Many Thanks,
Nigel Wood
On Sat, 27 Apr 2002, Sam Minnee wrote:
I've been asked to put together a very large (well, it's large to me)
database, and while mySQL is great for my current uses, I haven't had
experience with stuff of this scale.
The database will have about 88 tables, with up to 100 fields per table.
On Tue, 16 Apr 2002, David Ayliffe wrote:
Are MySQL really going to give you details of their past security
'issues'?
Think about it. Try going underground and looking on some exploit
sites.
DA
Hi,
I'm working on security breaches in MySQL. Can someone guide me in this.
To be
On Wed, 20 Mar 2002, Steve Gardner wrote:
Hi All
Could someone explain to me why an index in a select I am doing is only
sometimes used.
mysql explain select * from mailstat where domain_id in(2);
(EXPLAIN output truncated in the archive)
On Tue, 19 Mar 2002, Horacio Lam wrote:
The following is a query I tried, to view reports that were
between two dates. The problem is that the reports that are made
on the second date are not being displayed. If anyone can help me, please reply.
thanks
select ticket.t_id,
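The usual cause: a condition like created <= '2002-03-19' (or BETWEEN) stops at midnight at the start of the second date, so anything created later that day is excluded. An exclusive upper bound one day later fixes it (dates and the column name are illustrative):

  select ticket.t_id
  from ticket
  where created >= '2002-03-01'
    and created < date_add('2002-03-19', interval 1 day);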
-to-resolution from a non-GPL db company?
--
| Nigel Wood
| PlusNet Technologies Ltd.
+ -- Internet Access Solutions @ http://www.plus.net -
I have a table in a MySQL database on a Solaris system which contains log
entries and have stupidly overflowed Solaris' 4GB file
limit. The table is unusable. isamchk reports:
error: 'log.ISD' is not a ISAM-table
I have tried making a truncated copy of the file and isamchk'ing the
On Wed, 04 Apr 2001, Gunnar von Boehn wrote:
Hello
My provider 1&1 Puretec (www.puretec.de),
hosting more than 1.000.000 domains,
runs about 14 database servers with MySQL 3.22.32-log
on Linux dual Pentium-III 500MHz machines.
In the last 6 months the average uptime of the mysql-servers was
On Wed, 28 Mar 2001, Aigars Grins wrote:
Hi,
The short version:
Does anyone know an automated way of keeping two MySQL db's (located on
different machines) in sync? A small window of non-sync could be acceptable.
The long version:
snip
The important question is how much availability
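MySQL's built-in master/slave replication is the usual answer here, and its lag fits the acceptable small window of non-sync. A minimal 3.23-era sketch (server ids and hostname are placeholders):

  # master my.cnf
  [mysqld]
  log-bin
  server-id = 1

  # slave my.cnf (pre-4.1 style, where the master is named in the config file)
  [mysqld]
  server-id = 2
  master-host     = db-master.example.com
  master-user     = repl
  master-password = secret

Writes must all go to the master; the slave is effectively read-only as far as the replicated databases are concerned.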