Re: 97% CPU MySQL freeze Problem

2002-01-31 Thread Russell King

> For some days now we have had a critical problem:
> once or twice a day the MySQL CPU usage grows to 97%
> (normally 20-40%) and the mysql daemon freezes.

> mysql : 3.22.22
> os : FreeBSD 3.2
> system : intel dual pentium
> ram : 512 MB

This is a known problem with the threading library on FreeBSD. A suggested
fix is to compile MySQL against the LinuxThreads port, although I haven't
tried this myself.

Russ.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: mysqld & freebsd

2002-01-23 Thread Russell King

> From: Oleg Prokopyev <[EMAIL PROTECTED]>
> 
> > freebsd 4.4 stable mysql3.23.47

[deletia]

BTW - I tried to send the threads explanation to
your account, but I got a bounce-back with a "spam
host denied" message... (!?)

Russ.






Re: mysqld & freebsd

2002-01-23 Thread Russell King

From: Oleg Prokopyev <[EMAIL PROTECTED]>

> freebsd 4.4 stable mysql3.23.47
> 
> when i start it, it works -
> but after 10-15 minutes cpu usage rises to about 100%
> and mysqld does not respond to any query

Yes. It's caused by a problem in the FreeBSD threads
library, and this problem has been around for some time.

FreeBSD 4.2-RELEASE worked OK, IIRC, but basically you
cannot rely on FreeBSD's userland threads to work with
MySQL. There are too many people who've had problems
with make kernel/make world: they upgrade a version and,
'pffft', MySQL stops working properly.

You can check out the mailing list archives on this problem
on Deja: newsgroup *mysql*, keywords: freebsd threads cpu.

Sorry I don't have happier news. MySQL does work fine on
Linux, though, and if you must use BSD, try OpenBSD or
NetBSD, which use a different threads library.

Russ.






Re: need suggestions???

2002-01-21 Thread Russell King

> I need to save my database (comma separated) in a file, which I can do by
> mysqladmin commands. Now what I want to do is have this file auto-emailed
> once a week to the concerned person. The site that uses mysql is hosted
> on a Linux machine.

> I could add a process that does the above to the cron of the Linux
> environment, but I can't do that since the ISP that is hosting the site
> won't allow it. Can anyone give me any clues?

If you have shell access you could write a shell script with an infinite
loop that uses the 'sleep' command to wait a day between iterations.

Something like:

#!/bin/sh
while true; do
    mysqldump --user=root mydatabase | mail [EMAIL PROTECTED]
    sleep 86400   # wait 24 hours
done

Russ






Re: Porting from MS SQL to MySQL

2002-01-21 Thread Russell King

> We've requested a database from different companies, and specifically
> said we wanted MySQL or PostgreSQL because of the open source angle
> and because we're a library.
> One company offered MS SQL as the platform and said that they can later
> on port it to MySQL. For this they wanted 18,000 euro. Now, what I want
> to know is, how easy is it to port a (fairly complicated) database from
> MS SQL to MySQL? It can't be work worth 18,000 euro, now can it?

This sounds like a reasonable price for the work.

For this work they will have to:

1. Specify the changes and get signoff from you, the customer.
2. Check the capabilities of MySQL compared to their current database.
3. Define the scope of the work (what areas it's likely to affect in their
system).
4. Write a project plan and get signoff from you.
5. Do the development work.
6. Test internally.
7. Acceptance test with you, and get acceptance signoff.

Don't forget, they will need to check every query in their system, as MySQL
doesn't support (among others) subselects, triggers, and stored procedures.
This is in no way a trivial task.
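For example (the table names here are hypothetical), a typical subselect
has to be rewritten as a join before it will run on MySQL 3.23:

```sql
-- MS SQL original: customers who have placed at least one order
SELECT name FROM customers
WHERE id IN (SELECT customer_id FROM orders);

-- MySQL 3.23 rewrite: the same result expressed as a join
SELECT DISTINCT customers.name
FROM customers, orders
WHERE customers.id = orders.customer_id;
```

Multiply that by every query in the system and the testing burden becomes
clear.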

A project like this is likely to take 6 weeks to complete, provided the
consultancy work is not held up by the client. Typically you would expect
to charge triple the labour cost to cover rent of buildings, desks, bills,
PCs, advertising, a small profit, and a little extra to cover costs should
slippage occur.

Labour at E5,000/month, tripled, comes to E15,000 for one month's work, or
around E22,000 for 6 weeks (although it's unlikely to be 6 weeks solid, so
E18,000 looks pretty reasonable).

The actual development time is a very small component of the total cost.
The consultancy and testing are the main costs here.

Russ.






Re: Help with Database structure..

2002-01-19 Thread Russell King

> We thought about having one massive file for all this data, BUT, we
> archive something like 100,000 articles a day.  We store articles for 6
> months, so this table would very quickly have 18,000,000 rows.  Now, we
> need to select a group of articles based on source and date.  We also
> need to select based on url.

Shouldn't be a problem. You can put the whole thing in one table with
indexes on the various columns you'll be selecting on.

In my experience it's best to tokenise the source, date and url information
in a separate table, thus storing the filename and three integers.

1st integer - days since 1st Jan 1980 (for example). If you don't need to
store a full datetime field, then don't bother; just store a bigint.
2nd integer - contains a reference to the source. You can store the actual
source name in another table, with this reference number against it
(indexed, of course).
3rd integer - same as #2, but contains a reference number for the url, the
url itself being held elsewhere.

This will keep the indexes you look up on nice and small, as they store
three integer values (make sure you use the correct type of integer;
BIGINT, probably).

If you always look up on all three values, a single index on all three is
best.

To look up articles based on url, you check the url_table for the correct
reference number, then run a query on the big_table looking for that
reference number.

If you have duplicate urls, only store each one once in the url_table, as
it's just wasteful otherwise. This also makes for a faster url lookup.
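A sketch of that layout, using the table names above (the column names and
sizes are my own invention):

```sql
-- big_table: one row per archived article, storing only integers
CREATE TABLE big_table (
    filename   VARCHAR(255) NOT NULL,
    day_num    INT NOT NULL,         -- days since 1st Jan 1980
    source_ref INT NOT NULL,         -- reference number for the source
    url_ref    INT NOT NULL,         -- reference number into url_table
    INDEX (source_ref, day_num, url_ref)
);

-- url_table: each url stored once, looked up by reference number
CREATE TABLE url_table (
    url_ref INT NOT NULL PRIMARY KEY,
    url     VARCHAR(255) NOT NULL,
    INDEX (url(14))                  -- prefix index keeps it small
);
```

A lookup by url is then two small queries: find url_ref in url_table, then
select from big_table on that url_ref.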

> I can only imagine how long it would take to search 18,000,000 rows for a
> specific url.

Checking the larger table with 18M rows would typically come back in well
under a second on a 200MHz PC, although you need around 80M of index cache.
If all the urls are unique, this table will become pretty large, and you'll
need a fair-sized index on that too. I would imagine 300M extra would do
the trick, if you only index the first 14 characters or so.

These figures scale up pretty well until you run out of memory, when it all
gets a bit sluggish. If that's the case, you can store the different tables
on separate PCs to speed up retrieval. After all -- you aren't doing any
JOINs.

Where it gets interesting is when you get a request to search all of these
documents a la dejanews. It's actually not too difficult to build a very
fast word-search algorithm, but you'll start to deal with properly BIG
tables, with billions of rows...

You also need to factor in how reliable you want the system to be, and what
your acceptable downtime is. BTW - loading tables of over 1M records can be
a pain in the arse when you need to recover, and there's no easy way to
make it quicker.

Russ.








Re: HUGE Table Help

2001-11-21 Thread Russell King

From: "Kevin Smith" <[EMAIL PROTECTED]>

> I'm "considering" making a huge table, by that I mean approx. 26 millions
> records which will be static data, no updates, or deletes, etc. will be
> performed on this table, only select statements.
...
> At a guess the raw data may be in the region of 9GB.
>
> My question is, if I was to do a search on the Post Code (never on any
> address fields) what sort of performance can I expect from a Dual Zeon
> 933Mhz CPUs  running Windows 2000 Pro Server with 512MB RAM?
>
> This is based on the fact the the post code field will be indexed and
> perhaps an additional index that indexes the first two characters of the
> post code and therefore narrows down the search to specific records to
begin
> with, ie. OX16 0TH, index this and then create an index with the
characters
> OX...

I'm curious as to why you want to make more than one index.

Your main bottleneck is (IMO) your RAM requirement. A single index on
this is going to be around 200-300M. With additional caching and 2000's
memory requirements, you're squeezed for space.

It'd be easier to advise if we knew what sort of queries are going to be
run, how many concurrent users, etc.

Regards,

Russ.






EXPLAIN problem

2001-09-23 Thread Russell King

Hi all,

I'm having a problem where the optimiser seems to take a *very* long time
to work out which indexes to use.

Although I'm using a lot of joins, it takes around 10 minutes to give an
EXPLAIN of the query shown below. The actual query takes only a second or
two (or so it appears).

linklist is a table which contains 5.5 million rows; position contains only
95,000.

Anyone have any ideas? The query is as simple as it can be, and speed is
very important, as it returns immediate data to users. Apologies for the
size of the SQL; I can't see any way to reduce it.

select
DISTINCT position.id as total
from
position,
linklist ,
linklist as linklist2,
linklist as linklist3,
linklist as linklist4,
linklist as linklist5,
linklist as linklist6,
linklist as linklist7,
linklist as linklist8,
linklist as linklist9,
linklist as linklist10,
linklist as linklist11,
linklist as linklist12,
linklist as linklist13
where
position.id = linklist.position_id
and linklist.position_id = linklist2.position_id
and linklist.position_id = linklist3.position_id
and linklist.position_id = linklist4.position_id
and linklist.position_id = linklist5.position_id
and linklist.position_id = linklist6.position_id
and linklist.position_id = linklist7.position_id
and linklist.position_id = linklist8.position_id
and linklist.position_id = linklist9.position_id
and linklist.position_id = linklist10.position_id
and linklist.position_id = linklist11.position_id
and linklist.position_id = linklist12.position_id
and linklist.position_id = linklist13.position_id
and (linklist.id = '705' and linklist2.id = '125'
and linklist3.id = '1244' and linklist4.id = '4246'
and linklist5.id = '803' and linklist6.id = '1247'
and linklist7.id = '1904' and linklist8.id = '3509'
and linklist9.id = '69' and linklist10.id = '952'
and linklist11.id = '5' and linklist12.id = '3071' )
and (position.dater >= 968)
and (linklist13.id = '4' )

Many thanks,

Russ.






Re: threads & mysql

2001-06-14 Thread Russell King

From: Dvořáček Michal <[EMAIL PROTECTED]>
> Hi there,
>
> little problem. I have program (like phorums) and there is command
> mark all posts as readed. On one board is about 4000 posts, and if is
> new user and want mark all posts as readed - it's problem (big).
>
> All is written in ANSI C (using C API functions). And here is main
> problem: when i execute "marking" thread (most of mysql code run as
> separately threads) and someone wants list this board (list thread)
> nothing happens. After about 15 seconds program show results. And if
> you type show command for two times program crashed. I'm not using
> locking tables. And question: is possible that SELECT will wait on
> previously or currently running INSERT and how is it multithread
> programs, where some threads are using same table (to avoid error like
> Out of synchronization, Query cannot be executed now, )
>
> Thanx

Hi.

Yes, MySQL will wait for a large insert to finish before running a select.
At least, that's my experience.

You can get around this in either of two ways:

a) Add granularity to your insert thread - only insert 10 rows at a time,
for example.
b) Use a table type that has row- or page-level locking.

Personally I'd use the latter, but if you cannot do so for any reason,
you'll have to do the former.
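For option (b), an existing table can be converted in place (the table
name here is hypothetical, and the target type must be compiled into your
server):

```sql
-- InnoDB gives row-level locking; BDB gives page-level locking
ALTER TABLE posts TYPE=InnoDB;
```

Selects can then run against rows the insert thread isn't touching.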

Russ.






Re: innodb text fields

2001-06-14 Thread Russell King

From: ryc <[EMAIL PROTECTED]>

> I am aware that currently innobase tables can not have arbitrarily large
> text/varchar fields, and support for this will be released sometime in
> the month of June. The application I am working with requires text
> fields (usually less than 64k), and I desperately need row-level
> locking. Either I continue using Mysql with Innodb tables, or I have to
> switch to Postgresql.
>
> I need to do something right away about this performance problem; I can
> not wait until Innodb tables are released with the unlimited row length.
> I tried to find out what the max row length for an Innodb table row is,
> but did not find it in the documentation. If it isn't too much smaller
> than 64k I could switch now, and not have to worry about using
> postgresql. Does anyone know what the limit is? Any pointers would be
> appreciated.

Hi,

We used to have a similar problem with MS SQL Server (a limit that's now
been removed, I believe).

If you are willing to do some extra development you can get around this by
splitting the field into multiple chunks of (for example) 8k, and
inserting them with a field indicating re-assembly order.

Benefits: better use of space with a fixed field size; can store an object
of arbitrary size (i.e. from 1k to gigabytes).

Downside: extra development; slower retrieval.

So your code would look something like this (error handling and buffer
management omitted; assumes the chunks are text and bigvar is large
enough):

mysql_query(db, "SELECT chunk_data FROM chunks "
                "WHERE id = 5453 ORDER BY reassemble_order");
MYSQL_RES *results = mysql_store_result(db);
MYSQL_ROW row;

while ((row = mysql_fetch_row(results)) != NULL)
{
    /* append this chunk to the re-assembled value */
    strcat(bigvar, row[0]);
}
mysql_free_result(results);

Splitting it up in the first place is a similar procedure.

Russ.






Re: help exporting to spreadsheet

2001-06-14 Thread Russell King

> On 14-Jun-01 Terminal Velocity wrote:
> > I would like to use MySQL to manage my company data, but my clients want
> > all their information sent to them in an Excel spreadsheet. Presently I
> > use Access, and export a SELECT statement into Excel, and then make a
> > couple changes to the Excel file (I just high-lite the updated data).
> >
> > I want to change the database to MySQL on a Linux box, but the biggest
> > issue is being able to export into Excel. I know I can export to a
> > tab-delimited text file, but I would prefer to do it all in one step.
> >
> > Any ideas?

Do they want the information sent to them, or accessible via the internet?

In either case a CSV file is normally sufficient. If you want access via
the internet, it's pretty simple to produce a page that displays an "Excel
file" (actually CSV) in Excel within their browser, which looks quite
impressive. Email me for details and I'll post the method up here. It
involves using PHP and Apache.

Alternatively, if you want to dump to a CSV file, any language will do.
Personally I'd use C or PHP.
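Or let MySQL write the CSV itself. The table name and path here are
hypothetical, and note the file is written on the server's filesystem:

```sql
SELECT * INTO OUTFILE '/tmp/report.csv'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
FROM company_data;
```

A cron job can then mail the resulting file out each week.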

For highlighting and graphing data you're on your own, although you *may*
be able to incorporate a formula to do this in the CSV, IIRC.

Russ.






Re: problem using mysql_fetch_row in C API

2001-06-14 Thread Russell King

From: Don Read <[EMAIL PROTECTED]>
> On 14-Jun-01 Kenneth Johansen wrote:
> > hi all, ive got a small problem (again).
> > i have a table with an entry "foo" which is an int(3).
> >
> > my problem is that whenever i collect the values in my c program.
> > they are returned as characters with mysql_fetch_row. (which is ok)
> > but i cant seem to convert them into int,
> > using something like
> >
> > temp = (int) row[0];
> >
> > this might not (strictly) be a mysql question.
> > but IMHO i dont think a C forum is the right place either.
> >
>
> No it's not; & yes, it would.

I disagree. This is the correct forum, as many MySQL users
have this problem.

To elaborate: MySQL returns *all* its results as text strings, or more
specifically an array of text strings. This is actually a database
standard, IIRC.

Just put them through either atoi (for integers), atol (for longs) or
atoll (for long longs).

It depends on the compiler and libraries, but these will generally work.

So what you need is :

temp = atoi(row[0]);

Which should work fine.

Russ.






Re: How to create table in this case

2001-02-09 Thread Russell King

From: Suresh Kumar R <[EMAIL PROTECTED]>

> I want to have the absentee list of students per hour_of_day per day
> for a week, done for the entire course.
>
> One possibility I thought of was to create a table with the date and
> each hour of the day as columns, but then each cell would have to hold
> a variable number of roll nos of students who are absent for that
> particular hour, and I don't know how I can enter an array of numbers.

I think the best method to deal with this is just to have a single
table containing:

student_id        int
hour_of_day       int
day_of_week       int
week_of_course    int

One assumes a student is present for all classes.

If a student is absent, an entry is made in the table, recording
the student's number, and the time/day/week they are absent.

It is more elegant to record the date using a date field, but it
will complicate your subsequent select queries, so I would use
the above design in your case.

This has the added advantage of only storing data by exception.
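As a sketch (the table name is my own):

```sql
CREATE TABLE absences (
    student_id     INT NOT NULL,
    hour_of_day    INT NOT NULL,
    day_of_week    INT NOT NULL,
    week_of_course INT NOT NULL,
    INDEX (week_of_course, day_of_week, hour_of_day)
);

-- Who was absent in hour 3 on day 2 of week 5?
SELECT student_id FROM absences
WHERE week_of_course = 5 AND day_of_week = 2 AND hour_of_day = 3;
```

An empty result means full attendance for that hour.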

Russ.







Re: PHP on Cobalt RaQ - general info

2001-01-25 Thread Russell King

> >Just a question of curiosity.  Can you install PHP using tar files?
> >I was looking at purchasing a few servers last fall, and when I looked
> >at the RAQ3/4 they (Cobalt) told me that you could only install their
> >stuff and they didn't have new versions of MySQL, only PHP3.

> The only downside is it's not a standard Apache installation - so you
> can't just rebuild Apache.
>
> But yes, you CAN use MySQL and PHP just fine, from source.
> (I'll include the PHP installation instructions for a RaQ3, below.)
>
>
>
> >Would you recommend these to meet our needs?  (We need a dedicated
> >MySQL server, 2 apache servers (that load balance/high availability,
> >each of which will also run a nameserver), one dedicated mail server
> >(probably Qmail from what has been recommended so far).

I've successfully compiled and installed PHP4/MySQL on some Cobalt 3i
boxes. It's not too bad.

The Cobalt is a nice 'appliance' box, and is excellent for hosting
multiple domains. It's low maintenance as well.

However, given the lack of an affordable upgrade path, I find it difficult
to recommend it for a database application that may expand.

It depends on your level of experience. Setting up and maintaining Cobalts
is a breeze, but you can find problems later on if you need to expand or
upgrade.

