RE: Architectural Help

2012-05-24 Thread Nigel Wood
A few questions:
which is more of a problem: network outages, network capacity or query latency? 
When you say near real-time do you need transactional consistent view on all 
49 servers or can some lag be tolerated?
Can any one of the 49 local servers potentially update/delete the same rows or 
data?
Is there any natural segmentation point within the data? 
Do the data centres have diverse networks, so that connections to some data 
centres may remain when others are lost? 
In the event that a local data centre is totally isolated from the others what 
data should it be allowed to update?
Do your applications produce/examine large data sets, querying by secondary keys 
or using full text search?
Are you in a position to modify the applications?
__
From: Anupam Karmarkar [sb_akarmar...@yahoo.com]
Sent: Thursday, May 24, 2012 10:17 AM
To: mysql@lists.mysql.com
Subject: Architectural Help

Hi All,


I need architectural help for our requirement,


We have nearly 50 data centres throughout different cities. From these data 
centres, applications currently connect to a central database server. There are 
connectivity and network fluctuation issues for the different data centres, so we 
have come up with a solution: each data centre should have a local database server 
which keeps syncing with the other servers, so that the application doesn't fail. 
User data can be updated on any server and should be reflected on every server. 
The application consists of write/read/delete operations.


Currently the central server takes 1 million writes each day.


Only 1/1000 of the data needs to be distributed across servers; the rest needs to 
stay in the central server.


How can we achieve this? The solution needs to be very much real time, while 
accepting network lags.


Solution

Collect all changes from the other 49 servers into 1 central server (how can we 
collect the data?)


The 49 servers keep updating data in their local databases from the central 
server (this can be done using replication).
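
For the central-to-local direction, a minimal replication sketch on each local
server might look like this (host, credentials and binlog coordinates are
placeholders, not values from this thread):

CHANGE MASTER TO
    MASTER_HOST = 'central.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'replpass',        -- placeholder credentials
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;

Collecting the 1/1000 of distributed data back from the 49 servers is the harder
half; standard MySQL replication of this era is single-master, so that direction
needs an application-level feed.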



--Anupam




Re: mysql guru??

2012-01-20 Thread nigel wood
On 20/01/12 11:54, bruce wrote:
 Hi.
 
 Got a major pain that I'm trying to solve using mysql.
 
 Trying to handle a hierarchical tree structure, where I have a
 parent/child structure that grows as data is added to the system.
 
 The process needs to continuously determine if the overall tree, and
 all the associated nodes/leafs have completed so not only is the
 tree growing, but data for the given node/leaf is also changing,
 
 The system is comprised of a parent app which spawns descendant apps
 that in turn can spawn descendant apps, and so on..
 
 The system is represented in mysql as a parent/child tree, where each
 spawned app has an ID, as well as a status for the completion status
 of the app.
 
 I'm trying to find someone I can talk to regarding this, so I can get
 clarity on how this can be implemented.
 
 The process needs to be able to:
 -update the tree tbl with updated data from the running apps
 -update the tbl with new nodes/leafs as the spawned apps are created
 -quickly return 0/1 if the descendants of a node have been complete
 
 I've created a few different tbl defs, and played with a few different
 approaches, but haven't got this right yet. I've looked at a number of
 different articles covering hierarchical, adjacency models, closures,
 etc...
 
 **The nested soln isn't applicable to the project, as the data/tree
 tbl is continually growing, which would require a complete rebuilding
 of the nested tbls, which would impose a computational/time hit on the
 process.
 
 I can provide the sample tbl defs/data that I'm using, as well as more
 data on what I'm trying to accomplish.
 
 So, if you're skilled in this area, let's talk.
 
 Thanks
 
 -bruce
 
I'd agree with the general observation that your problem might not be
suited to a relational database. If you've a good reason for using one,
I'd go for the materialised path
(http://en.wikipedia.org/wiki/Materialized_path) or stored procs with
recursion myself. I've used all the various approaches over
my career.

Which processes are writing data and how many?
Which processes are reading data and how many?
Which processes are both reading and writing data and how many?
How quickly do you expect new entries to be added?
Will entries ever be deleted?
Do you need transactions?
What volume of working set data are we talking about?

If "the process needs to continuously determine" means lots of writers
and a single analyzing process, I'd definitely use stored procs and have
the procs write to a job-queue table for the analysis process.
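
As a hedged sketch of the materialised path idea (table and column names are
invented, not Bruce's schema):

create table app_node (
    id     int unsigned not null auto_increment primary key,
    path   varchar(255) not null,  -- e.g. '1/17/42/': root 1 -> child 17 -> this node
    status enum('running','complete') not null default 'running',
    key path_idx (path)
);

-- quick 0/1 answer: are all descendants of the node with path '1/17/42/' done?
select count(*) = 0 as all_done
from app_node
where path like '1/17/42/%' and status <> 'complete';

New nodes are added with a single insert (parent's path + parent's id), so the
tree can grow continuously without the rebuild cost of the nested set model.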

 Nigel

-- 
Nigel Wood
Plusnet BSS Architect





RE: RV: independent tables

2011-05-06 Thread Nigel Wood
On Fri, 2011-05-06 at 09:00 +0100, Rocio Gomez Escribano wrote:
 Tables client and user are quite similar, but they don't have any
 intersection; I mean, if somebody is a client, he or she can't be a user. So,
 I have his or her driving license and I need to know what kind of person this is.
 
 I'm trying some left and right joins, but I'm unable to get it!!
 

OK, a couple of questions:

Are you absolutely sure the two sets of people are mutually exclusive?
Why use two separate user and client tables rather than one person
table with a typeId which foreign keys on to a personType table? Is the
output of two columns named PersonType,PersonId acceptable for this
query?  

Assuming your design is correct as it stands, or fixed in its current
state, you can achieve what you want with a UNION:

select null as userID, clientID 
from client 
where clientCodeDrivingLicense = 321321321
UNION
select userId, null as clientID 
from user 
where userCodeDrivingLicense = 321321321

With regard to the performance of this system over time I'd suggest you
want a unique index on the DrivingLicense column/columns.
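
For example (index names invented; adjust to the real column names):

alter table client add unique index uniq_client_licence (clientCodeDrivingLicense);
alter table user   add unique index uniq_user_licence   (userCodeDrivingLicense);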

Hope that helps,

Nigel







Re: Fwd: DBCP error

2011-05-06 Thread Nigel Wood
On Fri, 2011-05-06 at 11:12 +0100, Dhaval Jaiswal wrote:
 Caused by: java.net.SocketException: Socket closed

I'd suggest you look at server side timeout and maximum connection
settings in
http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html I'd
suspect wait_timeout is the setting you're interested in. You might also
want to check the maximum TCP session/idle timeouts on firewalls,
routers and other network devices between the failing system and its
MySQL server.
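
For example, something along these lines shows and, if needed, raises the idle
timeout (28800 seconds is merely MySQL's default, not a recommendation):

show variables like 'wait_timeout';
set global wait_timeout = 28800;  -- applies to connections opened after this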

Hope that helps,
Nigel





Re: Update Table

2010-09-27 Thread Nigel Wood
On Mon, 2010-09-27 at 11:25 +0100, Willy Mularto wrote:
 Hi,
 I work on MySQL 5 with PHP 5 and use Apache 2 as the webserver. I have a
 simple query that searches for a matched row id and updates the field via an
 HTTP GET query. On low load it succeeds in updating the row. But on high
 traffic it sometimes fails to update some rows. No errors are returned by the
 script. What usually causes this and how do I solve the problem? Thanks
 
 
1.) Are you sure the script is executed under those conditions? Is
Apache refusing the request because too many children have been forked?

2.) Are you sure the script will report if MySQL fails with too many
connections?
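
As a first check on point 2, figures like these taken during a busy period
would show whether the connection limit is being hit (a hedged example, not a
diagnosis):

show variables like 'max_connections';
show status like 'Max_used_connections';
show status like 'Aborted_clients';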

 
 sangprabv
 sangpr...@gmail.com
 http://www.petitiononline.com/froyo/
 
 
 






Re: [PHP] SQL Syntax

2010-06-16 Thread Nigel Wood
[
 I have 2 tables. Table A containing 2 fields. A user ID and a picture ID =
 A(uid,pid) and another table B, containing 3 fields. The picture ID, an
 attribute ID and a value for that attribute = B(pid,aid,value).

 Table B contains several rows for a single PID with various AIDs and values.
 Each AID is unique to a PID.  (e.g. AID = 1 always holding the value for the
 image size and AID = 3 always holding a value for the image type)

 The goal is now to join table A on table B using pid, and selecting the rows
 based on MULTIPLE  attributes.

 So the result should only contain rows for images, that relate to an
 attribute ID = 1 (size) that is bigger than 100 AND!!! an attribute ID =
 5 that equals 'jpg'.


snip
 I appreciate your thoughts on this.

My first thought is that you're going to end up with some very
inefficient queries, or come unstuck with that table schema the first
time you have attributes of different types. What happens if
attribute 1 is dateTaken with the type date, attribute 2 is authorName
with the type string and attribute 3 is an aspect ratio N:n?

My second thought is to make sure you have a unique index on (pid,aid) in table 
b.

Sticking to the question you asked: let's assume the search for this run
of the search query is owned by userId 35 and has two attribute clauses:
attribute 1 > 50 and attribute 3 = 4.

I'd use:
drop temporary table if exists AttSearchMatches;
select pid as targetPid, count(*) as criteraMatched from B where userId=35 and 
( (b.aid=1 and b.value > 50) OR (b.aid=3 and b.value = 4) ) group by pid having 
criteraMatched = 2;
drop temporary table if exists AttSearchMatches;
select <fields you want> from criteraMatched cm on cm. inner join A on 
a.pid=criteraMatched.pid;
drop temporary table AttSearchMatches;

For best performance, specify the temp table structure explicitly and
add an index to pid. You could do this with a single query containing a
sub-query rather than temporary tables, but I've been bitten by
sub-query performance before.
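
A hedged sketch of that explicit temporary table (column types assumed; the
ownership filter is left out):

create temporary table AttSearchMatches (
    pid            int unsigned not null,
    criteraMatched int unsigned not null,
    key (pid)
);
insert into AttSearchMatches
select b.pid, count(*)
from B b
where ( (b.aid = 1 and b.value > 50) or (b.aid = 3 and b.value = 4) )
group by b.pid
having count(*) = 2;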

Hope that helps,

Nigel





Re: [PHP] SQL Syntax

2010-06-16 Thread Nigel Wood
On Wed, 2010-06-16 at 08:59 +0100, Nigel Wood wrote:
 I'd use:
 drop temporary table if exists AttSearchMatches;
 select pid as targetPid, count(*) as criteraMatched from B where
 userId=35 and ( (b.aid=1 and b.value > 50) OR (b.aid=3 and b.value
 = 4) ) group by pid having criteraMatched = 2;
 drop temporary table if exists AttSearchMatches;
 select <fields you want> from criteraMatched cm on cm. inner join A on
 a.pid=criteraMatched.pid;
 drop temporary table AttSearchMatches; 

Nope :-) Without the silly errors I'd use:

drop temporary table if exists AttSearchMatches;
create temporary table AttSearchMatches
select b.pid, count(*) as criteraMatched from B b where b.userId=35 and
( (b.aid=1 and b.value > 50) OR (b.aid=3 and b.value = 4) ) group by b.pid
having criteraMatched = 2;
select a.<fields you want> from AttSearchMatches asm inner join A a on
a.pid=asm.pid;
drop temporary table AttSearchMatches;

Sorry,
Nigel





Re: What if the user closes the browser while INSERT INTO MySQL? (PHP/MySQL)

2010-04-19 Thread Nigel Wood
  Consider the following concept,
 
  ~/index.php
 
  #1. Fetch data from an external webpage using PHP Curl;
  #2. Preg_match/Prepare Data to INSERT from local MySQL; - this may take a
  few secs
  #3. While Loop { INSERT data (from #2) into local MySQL } - this may take
  only mili secs.
 
  Suppose this code is run by a random user (say, my website visitor), and
  he/she closes the browser while the code was running. The real problem is
  when the browser is closed while #3 is executing. 
snip
 The server does not know if the browser is closed or not (or if the network 
 connection is lost). It will continue to execute the code until finished.

I'm not sure that's exactly correct. The default behaviour is however
for your script to be aborted when the remote client disconnects.
http://www.php.net/manual/en/features.connection-handling.php 

You can certainly set a script not to terminate if the user aborts
during the page request using ignore_user_abort() or a php.ini setting.

HTH 

Nigel

P.S. Sorry to the other list users for a PHP oriented discussion.






Re: Best approach for DB-based event logging?

2009-07-03 Thread nigel wood

Marcus Bointon wrote:

For the most part this is write-only and is only ever read very rarely, 
but when I do, it will be to retrieve the details of a single user, and 
all I need is the whole history, not individual events.


For your stated requirements the filesystem is probably most efficient.  
It does sound odd from a data retention/archiving and data retrieval 
times point of view, though. Will the earliest log data be held online 
and constantly available literally forever? Will you still want to pull 
back the whole list when a user's history hits 50MB? Would this data 
pass over a network?


Personally I'd use the database and I really wouldn't worry about the 
number of rows. Properly normalised, hundreds of millions of rows aren't a 
problem until you can't hold the indexes in memory; then it's time to shard. 
It looks like the logical shard point for http://www.smartmessages.net/ 
is customer.


I'd use bulk inserts where appropriate in applications adding lots of 
events, and I wouldn't store text I didn't need. In log messages most 
of the text is the same: I'd store just the data needed to generate the 
message and perhaps the templated text of the log message itself.


I'd use multiple 'events' tables and tie them together with temporary 
tables and SQL or, if efficient enough, a view. The first table is written 
to and is probably InnoDB or something else with row-level locking and 
foreign keys. The other tables would be 1:N MyISAM compressed read-only 
tables (which do support indexes) covered by a merge table. The copy 
from the InnoDB table to new MyISAM tables would be automated, as would the 
modification of the merge table. At some point old events could be 
removed by removing the table from the merge table and simply dropping it.
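
In DDL form, a hedged sketch (column types assumed; the structure matches the
corrected listing in my follow-up below):

create table ArchivedEvents001 (
    EventId     int unsigned not null,
    TargetId    int unsigned not null,
    EventTypeId int unsigned not null,
    EventData   varchar(255),
    EventDate   datetime not null,
    key (TargetId, EventDate)
) engine=MyISAM;
-- populate from CurrentEvents, then compress with myisampack + myisamchk -rq

create table MergedHistoricalEvents (
    EventId     int unsigned not null,
    TargetId    int unsigned not null,
    EventTypeId int unsigned not null,
    EventData   varchar(255),
    EventDate   datetime not null,
    key (TargetId, EventDate)
) engine=MERGE union=(ArchivedEvents001) insert_method=no;

-- retiring old events later is just ALTER TABLE ... UNION=(...) without the
-- oldest archive table, followed by a cheap DROP TABLE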



Here's a rough table structure. The indexes on the events tables would be 
TargetId at first, but probably TargetId+EventDate and later EventId+EventDate 
as you found more uses/added paging.



User/Actor --- CurrentEvents Innodb--- EventType
  TargetIDEventIdEventTypeId
 Username   TargetId  LogMesssageText 
(optional)

 TargetEventType
 EventData 
 EventDate


   
user/actor--- ArchivedEventsNNN (MyISAM compressed) ---EventType

 EventId
 TargetId
 TargetEventType
 EventData
 EventDate

User/Actor --- MergedHistoricalEvents --- EventType

  ***Merge of ArchivedEventsN to M

For maximum speed, or if you need to use the log messages in numerous 
languages, add a tiny template parser to replace EventData into the 
logMessageText at high speed using a native function. Lots of 
application-level things can build the messages from string + data, of 
course.


http://dev.mysql.com/doc/refman/5.1/en/adding-native-function.html

Just my brainstorm, and untried, so I'd appreciate other folks' thoughts on 
the suggestion, but it may be of some use.


Nigel




Re: Best approach for DB-based event logging?

2009-07-03 Thread nigel wood

nigel wood wrote:


 Here's a rough table structure. The indexes on the events tables would be 
 TargetId at first, but probably TargetId+EventDate and later EventId+EventDate 
 as you found more uses/added paging.



Well that didn't format very well :-( The tables structures are:

User/Actor
===
TargetId
Username

EventType
==
EventTypeId
LogMesssageText (optional)

CurrentEvents,ArchivedEventsNN,MergedEvents
=
EventId
TargetId
EventTypeId
EventData
EventDate


HTH

Nigel




Re: Problem with Dynamic table names.

2009-06-29 Thread nigel wood

Marco Bartz wrote:

I accidentally sent it before finishing...

I am looking for a way to do the following with a single query:
  

SELECT `ID`, `Name`, `Interface`,
(SELECT count(*) FROM CONCAT('listings_', `ID`) WHERE `Status`='Active') as
`activeListings`
FROM `sites`




I am querying the sites table and I want to have a field showing active
listings.

Each row in the sites table has a corresponding 'listings' table. so a site
where `ID`=1 will have a listings_1 table.

If I want to return data from 100 sites is there a way to get the active
listings count without making 100 extra queries?

  
At first glance your table design looks sub-optimal: a table per 
site certainly isn't normalised, particularly if the structure of each 
listings table is the same. Do you have this structure for performance 
or archiving reasons?


You certainly could do this in two queries if you used the results of 
the first to build a large left join or UNION query for the 
second, but it may just be using clever SQL to make up for poor table 
design, and not scalable in the long term. The queries would be:


select ID from sites;

SELECT `ID`, `Name`, `Interface`, count(*) as active
FROM `sites`
-- begin one left join per ID --
left join listings_[ID] on sites.ID = '[ID]' and listings_[ID].Status = 'Active'
-- end one left join per ID --
group by sites.ID;
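
The UNION variant, as a hedged sketch (uses a MySQL 4.1+ derived table; one
SELECT is generated per site ID from the first query's results):

SELECT s.ID, s.Name, s.Interface, c.active
FROM sites s
INNER JOIN (
    SELECT 1 AS ID, COUNT(*) AS active FROM listings_1 WHERE Status = 'Active'
    UNION ALL
    SELECT 2, COUNT(*) FROM listings_2 WHERE Status = 'Active'
    -- ...one SELECT per site ID...
) c ON c.ID = s.ID;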


Assuming you've a good reason for the table design perhaps a merge table 
is more what you need?





Re: Totalling Counts done in Subqueries

2009-04-30 Thread nigel wood

Peter Brawley wrote:

 Is there a way to total counts done in subqueries?

Select expression aliases can't be referenced at the same level. You 
have to create another outer level ...



alternatively use variables:

mysql> select @first := 1 as value1, @second := 2 as value2,
    ->        @first + @second as total;

+--------+--------+-------+
| value1 | value2 | total |
+--------+--------+-------+
|      1 |      2 |     3 |
+--------+--------+-------+
1 row in set (0.03 sec)





Re: servers full potential / FT searches locking tables

2007-08-27 Thread nigel wood

Justin wrote:
Sometimes I get about 300 connections to the server, all are selects 
and all select and get the data returned but the connection doesn't go 
away and the website doesn't load up.. usually if there is a lock, the 
selects wait 2-3 secs and build up, but once unlocked the queries all 
are performed and then go away.. and the sites load fine.. 3 times 
today this has happened and I'm trying to figure out what could be the 
cause of it.  if I restart MySQL everything is good.


Anyone have any ideas on how I can trace where the 
culprit would be?


it's a LAMP backend..


Are you classing insert...select or create table...select as selects? 
They have very different locking effects.

Is the database in replication? If so, is it a master or a slave?
Do you have NFS mounts anywhere on the database server?
Are you using exclusively InnoDB tables? If not, the source is probably 
table contention.
Get a couple of 'show full processlist' and 'show innodb status' query 
outputs during the lockups, and run 'vmstat 1 -S M' in another terminal. 
With the outputs from both you've something to work with.


Nigel Wood




Dazed confused. Less is more?

2007-07-31 Thread nigel wood


A puzzler for you guys.. all plausible explanations (and suggestions 
for proving them) gratefully received.


We run several MySQL database servers in the traditional master-slave 
configuration and attempt (rather poorly) to spread select queries 
between them. Normally the slave gets 1/3 of the master load.  Both 
machines have identical configurations, hardware specifications and 
network connectivity. The main clients of these databases are PHP 
websites without persistent connections. A fail-over pair of machines in 
a separate building replicates from the master.


Today (as a result of replication failure) we directed all the traffic 
normally sent to the reporting server back to the master server, adding 
1/3 to its load. Several areas of the websites got FASTER afterwards and 
I'm currently at a loss to explain why.



Nigel Wood




Re: Deletes on big tables

2006-10-19 Thread nigel wood

Marvin Wright wrote:


I have 3 tables where I keep cache records, the structures are something
like
 

TableA is a 1 to many on TableB which is a 1 to many on TableC 


To give you an idea of size, TableA has 8,686,769 rows, TableB has
56,322,236 rows and TableC has 1,089,635,551 rows.


My expiry runs on a daily basis and deletes approximately this number of
rows from each table.



 TableA500,000

 TableB  4,836,560

 TableC 71,534,549

 


My suggestions:

1) Add an (expirydate,primary key) composite index on table A and make 
sure your foreign keys are in place
2) Turn on cascading deletes for these three tables or (less optimally) 
use a single multi-table delete in a stored procedure to delete lots of 
rows (in A) per query  not singles
3) run the delete query with a limit of 1 rows or so in a 
transaction, use show innodb status to monitor how much rollback space 
the innodb engine has left and up the number rows if possible.
4) If your data integrity can take the risk  turn off innodb's flush on 
commit for this connection during the deletes.
5) Run optimise table once a week, if your deleting lots of records for 
a while the database is probably fragmented. If nessasary for 
availablity use a DR slave/master and run optimise table on the DR pair  
before promoting them to live.
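
A hedged sketch of items 1 and 3 (the 30-day retention window, batch size and
column names are assumptions, not from Marvin's schema):

alter table TableA add index idx_expiry (expirydate, id);

start transaction;
delete from TableA
where expirydate < date_sub(curdate(), interval 30 day)
limit 10000;
commit;
-- repeat until zero rows are affected, checking 'show innodb status'
-- for rollback segment headroom between batches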


Then look at the memory. If you can't upgrade the OS to use all that 
memory (we do on 32-bit servers) at least create a huge RAM disk and 
tell MySQL to use it as temporary storage.


Hope that helps

Nigel





Re: IP Address Function?

2006-08-26 Thread nigel wood

Chris Knipe wrote:


Hi,

I need to make *allot* of queries from an application dealing with IP 
Addresses.  At the moment, I'm storing IP addresses as a VARCHAR(16).  
I would *like* to store them as integers by converting the IP to its 
numerical equivalent.  I believe this would also save an enormous 
amount of table space.


The problem is, I *must* do the conversion as part of my SQL Query.  
Either during the SELECT/UPDATE/INSERT, or via a Procedure... And I 
must obviously also be able to convert the Integer back into a IP 
address during queries


Is this at all possible???


MySQL has dotted IP string to integer (and vice versa)  conversion 
functions.  You'd use them in queries like:


insert into  mytable set IPaddress = funcName('192.168.191.34');
select funcName(IPaddress) as strIPaddress, IPaddress as intIPaddress 
from mytable;


I suspect you posted to the list before attempting to find the functions in 
the online documentation, so I'll simply confirm they exist and leave 
you to: Do Your Own Research by Reading The Fine Manual.


Nigel Wood





Re: Query Question

2006-08-14 Thread nigel wood

Michael DePhillips wrote:


Hi,

 Does anyone have a clever way of returning: a requested value, with 
 one value less than that value and one value greater than that value, 
 with one query.


For example T1 contains

ID
1234
1235
1236
1238



Assuming the ids are consecutive.

You want surrounding rows returned if the target id is missing:
-
select id from T1 where id between (1237 - 1) and (1237 + 1);

You want no rows returned if the target id is missing:
-
select second.id from T1 as first
inner join T1 as second on second.id between (first.id - 1) and (first.id + 1)
where first.id = 1237;

If your ids aren't always consecutive, or you need something smarter, post again.



Nigel




Re: Query Question

2006-08-14 Thread nigel wood

Michael DePhillips wrote:


Hi Dan,

Thanks for the prompt reply,

 As I described it, yes, you are correct; however, the id may not always 
 be one (1) value away. So the number one needs, somehow, to be replaced 
 with a way to get the next largest value and the previous less-than value.


Sorry for the lack of precision in my prior post.



What you want to do isn't possible in the form you want without 
subqueries. Which would you prefer:

a) A single query returning one row of three columns containing nulls
b) A single query using subqueries (MySQL 4.1+ only) returning up to 
three rows with one column
c) multiple queries
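
As a hedged sketch, option (a) might look like this on 4.1+ (it leans on
subqueries too; 1237 stands in for the requested value and id is assumed
unique):

select
    (select max(id) from T1 where id < 1237) as previous_id,
    (select id      from T1 where id = 1237) as requested_id,
    (select min(id) from T1 where id > 1237) as next_id;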

Nigel





Re: Replicating queries to testing server

2006-05-24 Thread nigel wood

Dan Trainor wrote:


Dan Trainor wrote:


Hi -

I would like to be able to replicate all queries from a live MySQL 
server, to a testing server at the office.


The reason for doing this is to test load under [semi]real-world 
conditions with the new server.



Hi -

So I was thinking about this more, and then it dawned on me.  This is 
simple MySQL replication.


Sorry for wasting the time.


No, it isn't. Selects aren't replicated, nor is the timing true enough to catch 
contention problems.  I've done this recently and whilst I don't have 
time now I'll post my solution tomorrow. Assuming:

A) You're using *nix
B) Your application(s) and database are hosted on different machines
C) You have root access on one of the boxes
D) You can take a live db snapshot

It will let you record and play back your database server's load.

HTH

Nigel




Re: Newbie Locking Question

2006-04-26 Thread nigel wood

David T. Ashley wrote:


Nigel wrote:

 

 mod_php will persist the MySQL connection, holding open any lock or 
 synchronisation token obtained through any of the three methods: 
 begin/commit, lock/unlock tables or get_lock/release_lock.  PHP does 
 ensure that even in the event of timeouts or fatal errors any shutdown 
 handlers registered are still executed, so it is possible to clean up 
 properly whichever method is used. 
 http://uk.php.net/manual/en/function.register-shutdown-function.php If 
 you use PHP's PDO with transactions, it performs a rollback for you on 
 abort or completion.
   



What is a pdo?

Thanks, Dave.
 


PHP's newest official way to talk to databases:
http://www.php.net/pdo
http://wiki.cc/php/PDO
http://www.phpro.org/tutorials/Introduction-to-PHP-PDO-(PHP-Data-Objects).html

Nigel





Re: Newbie Locking Question

2006-04-22 Thread nigel wood

David T. Ashley wrote:


Hi,

I'm doing a PHP application, and there are just a few instances where I need
to do atomic operations on more than one table at a time and I can't express
what I want to do as a single SQL statement.

What I'm trying to guard against, naturally, is race conditions when more
than one process is making modifications at a time, i.e. multiple
simultaneous page hits.

LOCK TABLE widgets WRITE, thingamabobs WRITE ...

Make multiple interrelated table changes.

UNLOCK TABLES

--

In my estimation, what this should do is cause every other process to sleep
briefly until the first one is through to the UNLOCK TABLES.

I can't see going to a more complex locking model with such a simple
application.

Will the lock every table approach work as I intend?  

Yes, it will work as you suggest: it will lock your database solid, and aim 
the programming rifle squarely at both feet in readiness for the fateful 
day on which your client doesn't complete as quickly as you've estimated.



Anything I should
watch out for?
 

Your entire database freezing solid when the client fails to terminate. 
If you're using persistent connections with a mod_php webserver and the 
script aborts without releasing the lock, mod_php will kindly hold the 
mysql connection with the lock open for you.


If you can't or won't do this properly by using a transactional table 
and begin/commit, at least look at using get_lock() based guard 
conditions, which only lock a string, leaving the database accessible. 
Whatever you do, if your client is PHP, install a shutdown handler to clean 
up any locks.
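
As a hedged sketch (the lock name and timeout are invented):

select get_lock('widgets_thingamabobs_update', 10);  -- returns 1 on success, 0 on timeout
-- make the interrelated changes to widgets and thingamabobs here
select release_lock('widgets_thingamabobs_update');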


HTH

Nigel




Re: Newbie Locking Question

2006-04-22 Thread nigel wood

David T. Ashley wrote:


Nigel wrote:

 


If you can't or won't do this properly by using a transactional table
and begin/commit at least look at using get_lock() based guard
conditions which only lock a string leaving the database accessable.
Whatever you do if you client is php install a shutdown handler to clean
up any locks.
   



Hi Nigel,

Just one question:  the documentation of get_lock() says that the lock is
freed when a process terminates.

Does this also occur with mod_php and persistent connections if a PHP script
dies (say, due to fatal error or CPU limit timeout)?

It isn't clear how mod_php works and why with persistent connections the
LOCK TABLES locks wouldn't be freed but the get_lock() lock would.

You were suggesting that I replace the LOCK/UNLOCK TABLES critical section
with one implemented using get_lock()???

Or maybe you are suggesting something else ...
 


Hi Dave,

mod_php will persist the MySQL connection, holding open any lock or 
synchronisation token obtained through any of the three methods: 
begin/commit, lock/unlock tables or get_lock/release_lock.  PHP does 
ensure that even in the event of timeouts or fatal errors any shutdown 
handlers registered are still executed, so it is possible to clean up 
properly whichever method is used. 
http://uk.php.net/manual/en/function.register-shutdown-function.php If 
you use PHP's PDO with transactions, it performs a rollback for you on 
abort or completion.


Database locking always comes down to safety vs concurrency. In a 
multi-user environment you want to lock as few rows as possible during 
your critical section. InnoDB transactions are the only way to ensure 
correctness in a multi-statement update, but if you can't use 
transactions your goal is to find the smallest-impact scheme which is 
still safe.


I've used get_lock() string locks in the past to simulate row-level 
locks without transactions, but it's only safe in certain update schemes, 
and only if all your developers use the scheme consistently; it's never safe if 
your application isn't the only thing which modifies the rows of the 
tables/fields which need protection. Whether it's a smart thing to do 
depends on your situation.


HTH

Nigel





Re: Is there anyway to return an array?

2006-03-25 Thread nigel wood

David Godsey wrote:



I know, I know, sounds like something that should be done in the
presentation layer, howerver if possible, I would like to provide common
data presentation to multiple presentation layers (written in different
languages).

So is there anyway to return an array in mysql?



You're aware you're doing something stupid and want to do it anyway :-(

Why not return the values from your user-defined mysql function as a 
(properly quoted) comma-separated list? Since almost every application 
language now has a standard CSV file handling library, it should be easy 
to use across diverse display technologies.
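
Where the list is built from rows, GROUP_CONCAT can produce it directly; a
hedged sketch with invented names:

select order_id,
       group_concat(price order by item_id separator ',') as prices_csv
from order_items
group by order_id;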


Urrgh

Nigel




Re: Merge tables.

2006-03-14 Thread nigel wood

[EMAIL PROTECTED] wrote:


Paul Halliday [EMAIL PROTECTED] wrote on 14/03/2006 12:09:10:


 


As an example:

There was a table called event.

This table is now broken up like this:

event _sensor_date.

So for every sensor, and every day, there is now a new table. So if I
have 20 sensors, every day I will have 20 new tables.

With this in mind, does this design make sense?

how will this scale?

Is there anything I can do through configuration (I doubt the
developer will change the design) to speed things up? or a workaround
that I could do on my end to compensate?
   



Could you explain how this is meant to improve scalability? Because to my 
mind it is probably the best way I can imagine to make the system 
unscaleable. To me, this design very much does *not* make sense.


You have bought, in MySQL, a highly tuned specialist engine for searching 
and sorting stuff in the most efficient manner. And then you have said that 
you will disable all its optimisation and force it into a linear search. 

 



I can think of a reason for doing this, but not to the extent described. Is 
your developer trying to create a situation where it's easy to archive 
off results earlier than a given day? So you store, say, 1000 days of data 
and can quickly archive the oldest day at midnight each day.


Assuming this is the case: there's no point splitting further than by 
day, so tables per day/sensor don't make any sense unless you're worried 
about sub-second locking (i.e. doing it wrong).  You should make the 
unmerged tables as large as possible without the time to delete having 
an impact on your application. Having an impact depends on your 
application's tolerance of locking and the amount of data you're adding and 
removing; you'll need to find it by testing. The table type you use will 
have a big impact on concurrent-access locks. MyISAM and InnoDB are the 
two main candidates: MyISAM is quick but doesn't allow concurrent 
access to the table; InnoDB will allow concurrent access but still locks 
rows and can lock the 'head point' during certain inserts.
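
A hedged sketch of the by-day pattern (table and column names assumed):

create table event_all (
    sensor_id int not null,
    reading   double,
    ts        datetime not null,
    key (sensor_id, ts)
) engine=MERGE union=(event_20060312, event_20060313, event_20060314);
-- the event_YYYYMMDD tables must be identical MyISAM tables

-- at midnight: redefine the union without the oldest day, then drop it cheaply
alter table event_all union=(event_20060313, event_20060314, event_20060315);
drop table event_20060312;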


The fact you're storing sensor data worries me. How tolerant of 
lag/locking on insert or retrieval is your application? If it's sensitive 
to more than a second's lag you need a careful review of your design. If 
it's hard real-time, sack the developer, then review the design.


Hope this helps

Nigel




Re: Query Complexity Limit Question [Newbie Question]

2006-02-07 Thread nigel wood

David T. Ashley wrote:


Hi,

I have several tables linked in various ways so that an inner join is
possible.  However, at the same time and in the same SQL query, I'd also
like to query by some field values in one of the tables.

Two quick questions:

a)Will MySQL allow joins that involve more than two tables (in my case,
perhaps as many as 5)?

 

Yes. The join limit is 128 tables but if you hit it  you've just done it 
wrong.



 b) Can limits on a key field be included in the join in the same SQL
 statement as does the join, i.e. ... WHERE N > 3 AND N < 20 ... or something
 like that.

 


Yes. Though not everyone considers it good practice. As an example:

SELECT p.name, c.name
FROM parent p
INNER JOIN child c ON c.parent_id=p.id AND c.age < 18
WHERE  p.sex in ('male','unspecified');




Re: items quantity

2006-02-02 Thread nigel wood

Gyurasits Zoltán wrote:


Hi ALL!

Please help

DATA:

header_id   item_id   quant
1           1         10
1           2         20
2           1         100
2           2         200
3           1         20
3           2         15

header is the moving-type table and items is the items table.
If header.type_ is 1 it is an incoming move; if 2, an outgoing move.
I would like to calculate the incoming and outgoing quantity of items. (stock)
 


select i.item_id,
  sum(if(h.type_ = 1, i.quant, 0)) as Incoming,
  sum(if(h.type_ = 2, i.quant, 0)) as Outgoing,
  sum(if(h.type_ is null or h.type_ not in (1,2), i.quant, 0)) as OtherMovement
FROM header h
INNER JOIN items i on i.header_ID = h.id
GROUP BY i.item_id;

Hope this helps
Nigel





Re: Force max query time, or max records returned

2005-12-30 Thread nigel wood

Scott Baker wrote:

Is there a way to tell mysqld to stop running a query if it hasn't 
completed in over 60 seconds? Or how about stop running if the result 
is greater 100,000 rows or something? I have a bone head user who 
keeps querying two tables and NOT joining them, and then dragging the 
DB server down while it crunches this useless query.


How can I prevent this?

Scott


First some questions:
How is the user connecting: using a mysql client, a web interface or a 
custom program?
Who controls the machine they are accessing it from?
Do they have a dedicated unique database user?
Does the user issue thousands of scripted queries per day or just a dozen 
manually?

Do their queries contain inserts or updates?
Who controls the database structure?
Does anyone fully understand the database structure?
Do you have servers in a replication chain, or could you set one up?
Can you influence their behaviour without a technological fix? Does their 
incompetence merit other forms of action?

Just how much cash, hassle and time is a solution worth to you?
Just how much cash,  hassle and time is a solution worth to you?


The answers of course influence what is possible.  I'd say that if the 
user is an experienced programmer or DBA you've simply hired the wrong 
guy. Assuming not, and that you need a technical solution, it's likely to 
be one of these (in order of increasing effort):


Give the user a quick person-to-person refresher course in the use of 
entity models, EXPLAIN and indexes.


If they're connecting via the mysql client from a box you control, look 
at the --i-am-a-dummy, --select_limit= and --join_limit= options and 
consider allowing the person only to be able to invoke the mysql client 
in that form. In *nix this is easy to achieve. Perhaps you should force 
them to switch to working this way until they can be trusted if they 
currently access using a different client.
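
Those client-side options map onto server variables which can also be set per
session by whatever wraps their access (values are examples only):

set session sql_safe_updates = 1;      -- refuse UPDATE/DELETE without a keyed WHERE or LIMIT
set session sql_select_limit = 100000; -- cap rows returned by bare SELECTs
set session max_join_size = 1000000;   -- abort SELECTs likely to examine more rows than this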


If their queries are infrequent, read-only, can stand a small delay and 
you've spare hardware, consider setting up a replica reporting server. 
Deny the user access to the master server and let them slow the 
reporting server to their heart's content.


If it's a web interface you control and your programming skills allow, 
think about modifying the code to apply a similar set of restrictions to 
the user's queries. If the feature doesn't already exist in the web 
interface software and the application changes needed are beyond your ken, 
a jobbing programmer could be employed to add it inexpensively.


As an expansion of the web interface idea: if you understand the data 
relationships it's possible for an interface to cross-check and enforce 
the correct links, helping novice users without restricting experienced 
ones.  I've implemented this myself on an MIS project and it works well.


If the user is accessing remotely with a client they control, they're 
changing data state, you cannot influence their behaviour and your 
pockets are deep, the last resort is to force the user's queries through 
a spoof 'mysql server' which checks each query with EXPLAIN before 
applying it. They would have no access to the actual server, only through the 
proxy. Whilst I've never done this, as the old MySQL manual documents 
the protocol it's not impossible to make a validating proxy, just very, 
very expensive.


Hope part of this helps

Nigel




Re: YAQQ (Yet Another Query Question)

2005-12-14 Thread nigel wood

Mark Phillips wrote:


Flights
+---+--+--+
| flight_id | data1_id | data2_id |
+---+--+--+
| 1 |1 |1 |
| 2 |1 |3 |
| 3 |1 |1 |
| 4 |2 |2 |
| 5 |2 |3 |
| 6 |1 |1 |
| 7 |1 |1 |
| 8 |4 |4 |
| 9 |1 |2 |
|10 |1 |2 |
|11 |1 |1 |
+---+--+--+

The data1_id and data2_id are indexes for the data recorded for that flight.

I want to summarize the data. One such summary is to count the number of 
different data1_id's and data2_id's. For example:


Flight Result Summary
index:  1   2   3   4
data1_id8   2   0   1
data2_id5   3   2   1

 


select
  sum(if(data1_id = 1, 1, 0)) as data1_id_1,  sum(if(data1_id = 2, 1, 0)) as data1_id_2,  etc, etc,
  sum(if(data2_id = 1, 1, 0)) as data2_id_1,  sum(if(data2_id = 2, 1, 0)) as data2_id_2,  etc, etc

from flights

add composite indexes if required for speed.

Nigel




Re: YAQQ (Yet Another Query Question)

2005-12-14 Thread nigel wood

Mark Phillips wrote:



 2. Generally, what is the most efficient way to do this? Is it better to 
 issue more queries that gather the calculated data, or better to issue one 
 query for the raw data and then do the calculations in Java? I am sure there 
 are many factors that affect the answer to this question - server resources, 
 code design, etc. However, I am interested in a best-practices type of answer 
 or general rule of thumb from the sage experts on the list. 

 


Sorry only just spotted the second half.

Processing in MySQL will be faster than pulling the dataset back and 
processing it. This is particularly true if the database server is 
remote from the servlet container. The chief reason is that processing 
it on the client adds the time needed to copy the raw data over the 
network. In Java or the C family, data-processing performance can be on a par 
with MySQL once the data is obtained; against an interpreted language 
such as PHP or Perl the database's performance will always win hands 
down, even if temporary tables are needed.


If the rocket data doesn't change rapidly, the MySQL query cache will 
also improve performance. This feature speeds things up by remembering the 
answer to your query and replying with a cached version until the 
rockets table is next updated.
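
To see whether the cache is enabled and how it is behaving, for example:

show variables like 'query_cache%';
show status like 'Qcache%';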


Nigel




Re: linux timestamp

2005-09-27 Thread Nigel Wood

Dotan Cohen wrote:


 Hi all, I have a field in a mysql database v4.0.18 that contains a
 linux timestamp. I have been googling for a solution that would
 return to me all the entries where the timestamp falls on, say, a
 Wednesday, or between 2pm and 3pm. I am led to believe that it is
 possible, but I have found no examples. Something like:
 SELECT * from listings WHERE timestamp(day == wednesday)
 or
 SELECT * from listings WHERE timestamp(14:00 <= time <= 15:00)

 



Fair warning: because MySQL won't be able to make proper use of its 
indexes, the following queries will be VERY slow with any reasonably 
sized data set. If you're going to be performing these queries often I'd 
recommend either storing the field as a datetime (you can do datetime 
to unixtime conversion in MySQL using the unix_timestamp() function) or 
denormalising the data and storing both.


SELECT * from listings 
WHERE date_format(from_unixtime(timestamp), '%W') = 'Wednesday';

SELECT * from listings 
WHERE cast(date_format(from_unixtime(timestamp), '%H') as unsigned) between 14 and 15;
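
If you take the denormalising route, a hedged sketch (the new column name is
invented):

alter table listings
    add column listed_at datetime,
    add index (listed_at);
update listings set listed_at = from_unixtime(timestamp);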



Of course, I don't expect these examples to work, I'm just trying to
illustrate what I'm trying to accomplish. Until now, I have been
pulling all the fields and checking the timestamp with php. But I
believe that there must be a better way. Thanks.

Dotan Cohen
http://lyricslist.com/lyrics/artist_albums/109/carlisle_belinda.php
Carlisle, Belinda Song Lyrics

 







Re: I have a slow query that needs optimising..

2002-10-03 Thread nigel wood

On Thu, 03 Oct 2002, Andrew Braithwaite wrote:

Please can you post an explain of this query? As a first guess try:

alter table publicinfo add index location (x,y,class_code);

Nigel





Optimize table

2002-10-02 Thread nigel wood


I have several database tables which regularly near the operating system's 4GB
file limit. Until now I have been using mysqldump / 'delete from
tablename' / 'optimise table tablename' to back up a large chunk of the oldest
data and remove it from the table.  I have been doing this every couple of
months.

Approximately 172,800 rows are added to each table daily. Each row contains an
enum field 'status' which holds the current state of the row.  All rows have
this enum field updated as they pass through various states before hitting the 
final state within 4 days of insertion. All the fields in these tables are of
fixed lengths. The database server is using MySQL version 3.22.

Now that merge tables have had plenty of time to become stable, I am considering
upgrading the database server to 3.23 and would like to automate the archiving
process so that a script runs daily in the following manner:

1) All rows stay in the 'most recent' table while they are being updated
2) All rows which are older than 5 days are copied using 'insert select from'
to another 'final state' table. 
3) The copied rows are then deleted from the 'most recent' table.   

The two tables are bound together using merge tables, so that they appear as
one for reporting purposes. As the 'final state' table reaches its maximum
size a new 'final state' table will be added. A sketch of the daily steps:
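
A hedged sketch of the daily job (table and column names assumed; the 5-day
window is as described above):

insert into log_final
select * from log_recent
where status = 'final' and inserted < date_sub(curdate(), interval 5 day);

delete from log_recent
where status = 'final' and inserted < date_sub(curdate(), interval 5 day);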

My question relates to the optimise table command. Will I need to run it
on the 'most recent' table after each daily delete? The number of rows in
the first table will stay fairly constant, so if I never run it, will the
table/index space tracking the deleted rows' locations eventually be reused,
or will the table/index size grow constantly until one of the files hits the
operating system limit? 

Many Thanks,

Nigel Wood





Re: Limits of mySQL

2002-04-27 Thread nigel wood

On Sat, 27 Apr 2002, Sam Minnee wrote:
 I've been asked to put together a very large (well, it's large to me)
 database, and while mySQL is great for my current uses, I haven't had
 experience with stuff of this scale.
 
 The database will have about 88 tables, with up to 100 fields per table.
 There is a _lot_ of interlinking among the tables, and each transaction
 will have about 10k of data.  By the end of the first year, almost 500,000
 transactions will be in the database.  Unfortunately, I can't be more
 specific, as another party is designing the database specification, which I
 don't have a copy of yet.
 
 Now, if I were to use mySQL I would want to use the transactional version.
 I haven't had any experience with this, how does its performance and
 reliability compare (obviously the transactions are a + to its reliability).
 
 My question is: Will mySQL be able to handle this amount / complexity of
 data well, and how much better would, say, Oracle or even MS SQL Server 2000
 be?  What about PostgreSQL? PostgreSQLs relationships, constraints, views,
 and stored procedures would be beneficial, but not at the cost of of
 suitable performance.
 
 It would be much appreciated if someone with more experience developing
 databases of this scale could give me some advice on the pros and cons of
 each platform.

I'm not a suitable person to compare MySQL against other databases, but MySQL 
will cope with this size of database if the tables are properly indexed and
your queries optimised. Here our main database has over 90 tables, and several
of our tables hold about the amount of data you'll accrue in year one (with many
more rows).  None of our tables has that many fields (the most complex has
50) but I don't think 100 fields will affect MySQL performance
significantly. Several of our tables have more than 10k per row and work just
fine.

It sounds as though your tables will be skirting close to the 4GB file
limit on most standard Unixes, so be sure to enable large file support (or use a
raw partition for InnoDB). 

We're only using InnoDB on small projects so far, but it seems to perform well,
although we have had some issues with its interaction with the PHP scripting
language's persistent connections.

 HTH

Nigel





RE: MySQL security

2002-04-16 Thread nigel wood

On Tue, 16 Apr 2002, David Ayliffe wrote:
 Are MySQL really going to give you details of their past security
 'issues'?
 
 Think about it.  Try going underground and looking on some exploit
 sites.
 
 DA
 
 
 
 Hi,
 I'm working on security breaches in MySQL. Can someone guide me in this.
 To be precise can you tell me about some literature, some book which is
 easy to understand and also profound ! If anyone is working on that,
 I'll be more than happy to interact. Thanking you.

Why not? If a vulnerability is discovered for which no known exploit exists,
it's reasonable to withhold that information (for a limited period) whilst a fix
is developed.  If the vulnerability is showing up on exploit sites I'd expect
MySQL AB to be shouting its existence (if not howto details) from the roof
tops so the users can take counter-measures whether a fix exists or not. 

Nigel





Re: Index only sometimes used

2002-03-20 Thread nigel wood

On Wed, 20 Mar 2002, Steve Gardner wrote:
 Hi All
 
 Could someone explain to me why an index in a select I am doing is only
 sometimes used.

 mysql> explain select * from mailstat where domain_id in(2);
 +----------+------+---------------+------+---------+------+---------+------------+
 | table    | type | possible_keys | key  | key_len | ref  | rows    | Extra      |
 +----------+------+---------------+------+---------+------+---------+------------+
 | mailstat | ALL  | ind4          | NULL |    NULL | NULL | 2955666 | where used |
 +----------+------+---------------+------+---------+------+---------+------------+
 1 row in set (0.00 sec)
 *notice ind4 is NOT used
snip 
 Basically, when I use '2' in the in() statement (and one or two other values
 from hundreds that do work), the index is not used.
snip
 If anyone could shed some light on this I would b most greatful :)
 
 Cheers, Steve

Check the manual section on the query optimiser. I seem to remember that if
MySQL has to check more than some percentage of a table (30%?) the
optimiser guesses it can scan the entire table just as quickly. 
Someone please correct me if I'm wrong.

Try 'select domain_id, count(*) from mailstat group by domain_id'  to check the
distribution of your data.

Hope this helps,

Nigel





Re: query

2002-03-19 Thread nigel wood

On Tue, 19 Mar 2002, Horacio Lam wrote:
 The following is a query I tried, to view reports that were 
 between two dates; the problem is that the reports that are made 
 on the second date are not being displayed. If anyone can help me, please reply.
 Thanks
 
 
 select ticket.t_id, t_summary, t_category, e_status, t_department, 
 e_assignedto, t_priority, t_timestamp_openend from ticket, events, tmpeid
 where ticket.t_id = events.t_id and 
 events.e_id = tmpeid.e_id and t_timestamp_opened 

http://www.mysql.com/manual.php   (the manual)

Check the manual, but I think 'yyyy-mm-dd' means 'yyyy-mm-dd 00:00:00'; change
the final clause of the between to be 'yyyy-mm-dd 23:59:59'.

hope this helps,

Nigel 





Downsides of MySQL?

2001-08-16 Thread nigel wood

 The mySQL
 security model is also not sufficiently developed for any system that
 involves money. 

FUD. My company stores everything, including encrypted credit card details and
authorisation codes, in a MyISAM database. While some careful work is required
to get table locking right, and your application must do some policing for
constraints, it is very possible. 

I'd be happy to recommend MySQL support to anyone; one email to support gets
you in touch with a lead developer.  Our resolution times have been between 3
hours and 1 business day. Anyone care to comment on their average
time-to-resolution from a non-GPL db company?

-- 
| Nigel Wood
| PlusNet Technologies Ltd. 
+ -- Internet Access Solutions @ http://www.plus.net - 





Whoops, 4GB limit

2001-06-29 Thread nigel wood

I have a table in a MySQL database on a Solaris system which contains log
entries, and I have stupidly overflowed Solaris' 4GB file
limit.  The table is unusable. isamchk reports:
   error: 'log.ISD' is not a ISAM-table

I have tried making a truncated copy of the file and isamchk'ing the
shortened file. The message is still the same. Based on the amount of time 
isamchk takes to determine the file is not its domain (< 1/10 sec), 
the problem must be something simple like a missing header or start-of-record
mark.

Any ideas? Is the ISAM format documented anywhere? 

Nigel





Re: Provider claims 'it's normal that mysql crashes', is that true?

2001-04-04 Thread nigel wood

On Wed, 04 Apr 2001, Gunnar von Boehn wrote:
 Hello
 
 
 My provider 1&1-Puretec (www.puretec.de),
 hosting more than 1.000.000 domains,
 runs about 14 database servers with MySQL 3.22.32-log
 on Linux dual Pentium-III 500MHz machines.
 
 In the last 6 months the average uptime of the mysql-servers was around 8 hours.
 
 When I asked them why their mysql-servers die so often, I got their
 standard problem email answer claiming "mysql is scaling rather badly".
 
 After further asking I got a personal answer that says:
 "it's normal the mysql-servers die because of heavy load".
 "We can't help that the mysql task keeps crashing if too many users access it.
 That's totally normal for a mysql database"
 
 
 Is that true?

Only if their servers/queries are set up wrong. We use Sparc/Solaris exclusively
in our production network now but used to have more data than this hosted on a
similar machine. They could be telling the truth if they have huge queries that
lock the tables for extended periods. The table-level locking of ISAM tables can
cause problems on very busy servers. MySQL's newer table types avoid this problem
but I'm not sure how production-ready they are.

Nigel  





Re: Reg. Synchronization between MySQL DBs

2001-03-28 Thread nigel wood

On Wed, 28 Mar 2001, Aigars Grins wrote:
 Hi,
 
 The short version:
 
 Does anyone know an automated way of keeping two MySQL db's (located on
 different machines) in sync? A small window of non-sync could be acceptable.
 

 The long version:
snip

The important question is how much availability (the ability to survive
network partitions) you're looking for: 

1) Is simple master -> slave replication good enough for your application? 
All inserts & updates go onto the master, the slave being read-only at all
times.

   MySQL has in built support for this (Gamma).

2) Are the queries you're performing on the data you're capturing 'transactional',
i.e. dependent on the presence/state of the rest of the data set all the time?  
   
If not, MySQL + some scripting can be made to re-sync two 'peer' databases
on a regular basis, providing you design your insert & update queries carefully,
particularly incrementing keys. This can be done with the plain-text update log
or with timestamped fields. Several people who frequent the list have made
similar applications to this. Here we run two copies of a critical database in
this manner, both operating read/write & surviving each other's failure.

Both of these are documented in the manual. If your application needs fully
distributed transactions, MySQL is not the system you're looking for.

Nigel


