Re: Possible to delay index writes until server is less busy?

2005-06-30 Thread gunmuse

Write to a memory table first, then do a hotcopy on a scheduled basis.
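A sketch of that staging pattern (table names are hypothetical, and SQLite stands in here for a MySQL MEMORY table plus a scheduled copy job):

```python
import sqlite3

# In MySQL the staging table would be ENGINE=MEMORY and the flush would run
# from cron; SQLite stands in just to show the copy-then-clear shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("CREATE TABLE articles_staging (id INTEGER PRIMARY KEY, body TEXT)")

def flush_staging(conn):
    """Copy everything from the fast staging table into the durable
    table, then empty the staging table, in one transaction."""
    with conn:
        conn.execute("INSERT INTO articles SELECT * FROM articles_staging")
        conn.execute("DELETE FROM articles_staging")

conn.execute("INSERT INTO articles_staging (body) VALUES ('fast insert')")
flush_staging(conn)
```

The point is that the busy writers only ever touch the in-memory table; the durable (and indexed) table is written in one batch at a quiet moment.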


- Original Message - 
From: Mathias [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Thursday, June 30, 2005 9:10 AM
Subject: Possible to delay index writes until server is less busy?


We've been benchmarking a database that in real life will have a huge 
write load (max peak load 1 inserts/second) to the same table 
(MyISAM).


We will need about 4 indexes for that table. However, from our benchmark 
tests, it is clear that writing indexes takes too many resources and 
impedes the speed of inserting new records.


To overcome this, we are thinking of:
1 -  using several smaller tables (instead of one big one) by creating and 
writing to a new table every x hours,
2 -  waiting to write the indexes until a new table has been created 
where the next inserts will go (i.e., not writing indexes until the table 
has been closed).
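Option 1 amounts to bucketing inserts by time; a sketch of a possible naming scheme (the function and format are illustrative, not from the thread):

```python
from datetime import datetime

def rotation_table_name(base, ts, hours=6):
    """Return the name of the table that inserts at time `ts` go to,
    rotating to a fresh table every `hours` hours."""
    bucket = ts.hour // hours
    return f"{base}_{ts:%Y%m%d}_{bucket}"

# Inserts at 09:10 and 11:59 land in the same table; 12:01 starts a new one.
name = rotation_table_name("events", datetime(2005, 6, 30, 9, 10))
```

Once a bucket closes, its table is never written again, so the indexes for it can be built whenever the server is idle.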


The biggest problem now is if the indexes are created when the server is 
very busy. If there was a way of telling MySQL to delay creating the 
indexes when it is busy, then a big obstacle would be out of the way.


Is this possible? We could not find anything in the MySQL documentation 
concerning this.


Any suggestions would be greatly appreciated.

Kind regards,

Mathias


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]









RE: What is best open-source platform alternative to overcome innodb 2Gb memory limit on Linux? FreeBSD?

2005-06-15 Thread gunmuse
I agree about the new AMDs (can't say just Opteron; specifically the 246 and
248 CPUs), which are moving data between CPU and RAM at 6 gig per second
versus the Xeons' peak of 2 gig right now.

The new Opterons communicate with RAM better than any other CPU on the
market, and with the right MySQL setup that is a huge benefit.

My news site platform is going to be moving from Xeon to AMD for that very
reason.  Our software is written to avoid hard drive calls at all costs to
keep our page loads super fast.

I would add to his suggestion a RAID 0+1 setup, which would double your drive
output speed.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.prnewsnow.com  Free content for your website
469 228 2183


-Original Message-
From: David Griffiths [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 15, 2005 11:17 AM
To: Brady Brown
Cc: mysql@lists.mysql.com
Subject: Re: What is best open-source platform alternative to overcome
innodb 2Gb memory limit on Linux? FreeBSD?


Why not go AMD-64? Dual Opteron, with 8/16/32 gig of RAM? Get a 3ware
SATA controller, and run Gentoo for AMD-64. You can increase your InnoDB
buffer pool to use almost all that space. You can make your buffer pool
as large as the physical RAM in your machine can support. No 2.5 gig per
process, no 4-gig limit on addressable memory (without the address extensions).

Your hardware is holding you back more than your operating system.

David





Brady Brown wrote:

 Hi,

 I am currently running a large database (around 20Gb) on a 32bit x86
 Linux platform. Many of my larger data-crunching queries are
 disk-bound due to the limitation described in the innodb configuration
 documentation:

 *Warning:* On 32-bit GNU/Linux x86, you must be careful not to set
 memory usage too high. |glibc| may allow the process heap to grow over
 thread stacks, which crashes your server. It is a risk if the value of
 the following expression is close to or exceeds 2GB:

 Being a responsible citizen, I have my innodb_buffer_pool_size set
 below 2Gb.  But the time has come to scale the application, so I need
 an alternative solution that will allow me to set
 innodb_buffer_pool_size as high as my heart desires (or at least well
 beyond 2Gb).

 Do any of you have battle-tested recommendations?
 How about FreeBSD?  From what I can gather, it is a good idea to build
 MySQL on FreeBSD linked with the Linux Thread Library. Would doing so
 re-invoke the 2Gb limit?

 I look forward to your collective responses. Thanks!

 Brady






RE: Dual Opteron, linux kernels, 64 bit, mysql 4.1, InnoDB

2005-05-09 Thread gunmuse
Why not RAID 3, and take advantage of disk write and read performance?

RAID 3 isn't commonly used because it has CPU overhead.  But at the same
time, Apache causes CPU overhead while waiting for the data from the drives.

I am buying this exact same server, with 32 gig of RAM.

Frankly, the slowest thing in my current RAID 5 server is still waiting for
the disks to read.  That is what prompted me to think bigger processors,
more RAM and a faster motherboard to compensate for using RAID 3 to overcome
the slowest hardware in my server.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Jeremiah Gowdy [mailto:[EMAIL PROTECTED]
Sent: Monday, May 09, 2005 6:37 PM
To: Dathan Pattishall; Richard Dale; mysql@lists.mysql.com
Subject: Re: Dual Opteron, linux kernels, 64 bit, mysql 4.1, InnoDB


I use Redhat Advanced Server v4 (2.6 kernel) on my four dual opteron
systems.  I've had no real performance issues with the I/O scheduler, but
that's because I run 8GB of ram with a huge key cache.  I recommend taking
the box to 8GB of ram, it's worth it.  Definitely use RAID 10.

- Original Message -
From: Dathan Pattishall [EMAIL PROTECTED]
To: Richard Dale [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Monday, May 09, 2005 4:15 PM
Subject: RE: Dual Opteron, linux kernels, 64 bit, mysql 4.1, InnoDB




 -Original Message-
 From: Richard Dale [mailto:[EMAIL PROTECTED]
 Sent: Sunday, May 08, 2005 9:37 PM
 To: mysql@lists.mysql.com
 Subject: Dual Opteron, linux kernels, 64 bit, mysql 4.1, InnoDB

 A new server is about to arrive here and will have have 8x15K
 RPM spindles, dual Opteron processors and 4GB of RAM, and
 will have around 100GB of database (primarily stock market
 prices) - the SCSI controller will also have battery-backed
 RAM too.  InnoDB will be used exclusively.

 I've searched the list and seen varying reports of which
 Linux kernels work best etc.

 I'd be interested to know the following:
 a) Which 64 bit Linux distributions are good for the task?

   Suse 8.1 2.4.21-215-smp #1 SMP Tue Apr 27 16:05:19 UTC 2004
x86_64 unknown

 b) Which 64 bit Linux distributions are bad for the task?

On 2.6, the IO scheduler is still messed up.
  RedHat AS / Suse 9.x are messed up as well

 c) Any comments on kernels, particularly with regard to 64
 bit support and schedulers?  Any problems with the latest
 kernels (2.6.11 & 2.6.12-rcX)?

 d) Any recommendations for RAID volume setup

Use RAID-10; split the disks evenly across each channel


 e) Any MySQL optimisations for multiple spindles, onboard
 caching, stripe sizes, RAID5 vs RAID10.

Don't use RAID5; use ReiserFS if you're using SUSE

 f) Any MySQL reliability settings to take into account the
 battery-backed RAM on the RAID controller?

 I'm happy to collate the responses into a summary too.

 I'm aware of the following discussions which describes a
 reasonably grunty Dual AMD system with a similar
 configuration to mine:
 http://meta.wikimedia.org/wiki/Hardware_ordered_April_2005
 http://meta.wikimedia.org/wiki/Hardware_order_May_2004

 Best regards,
 Richard Dale.
 Norgate Investor Services
 - Premium quality Stock, Futures and Foreign Exchange Data for
   markets in Australia, Asia, Canada, Europe, UK & USA -
 www.premiumdata.net









Can I use LIKE this way?

2005-05-01 Thread gunmuse



{
$dead_beat_ads = "Delete from fsearch_temp where fsearchKeyword='$str' and
fsearchTime='$time' and fsearchIp='$fbip' and fsearchHost LIKE '%$dbad%'";
$dead_beat_result = $dbi->query($dead_beat_ads);
}

What I am doing is eliminating "run of site" advertisers from our network,
LIKE EBAY.com.  I would rather give free crawler results than allow this
spam advertising anymore.  So before I develop our XML output of search
results to our partners, I am going to remove them; but since their tracking
can change, all I want to do is search for them by host, LIKE ebay.com, and
pull this out in real time (thus all the $time requirements).  This will
mean that no matter which of our partners eBay buys advertising from, they
won't be displayed through our network.  Or if someone buys a Commission
Junction ad, we will pull them out by the host even though the click URL may
be a qcksrv ad.

So the question is: can that % be butted up against the variable, or should
I put a space in there?  While reading about LIKE on mysql.com, it talks
about second position with a space, but I didn't know if that meant the use
of %_$dbad_% or if % $dbad % would have it looking for second-position
stuff.


Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183



RE: Can I use LIKE this way?

2005-05-01 Thread gunmuse
Yeah, the problem is we are trying to do something that is not done by any
of the other search engines.  I am working on a quick clean to weed out
search spammers on the fly.  We have upstream partners (like Overture) who
like to fill us with their run-of-site ads even when they don't have
paid advertisers.  While this is good for the bottom line, we find it's
annoying as hell to those who use our search.

It's our hope that folks will use our feed for their search because of our
work to keep it clean.  There are tons of sites out there producing feeds.

Because I couldn't do it the way I wanted to, I found a new way: cleaning
the URLs from each of our partners before we add them to RAM for our output
sorting.  By cleaning the URLs I was able to delete by using:

 {
  //print $dbad;
  //print "<br />";
  $dead_beat_ads = "Delete from fsearch_temp where fsearchHost='$dbad'";
  $dead_beat_result = $dbi->query($dead_beat_ads);
 }

This will give a greater degree of certainty about who we are deleting on
the fly.  I have already tested this; we are actually running it live on our
outdoor network, and will implement it on http://www.sharedfeed.com for its
opening-day release next week.

By avoiding talking to the hard drive, cleaning up content on the fly is a
snap.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183

  -Original Message-
  From: Rhino [mailto:[EMAIL PROTECTED]
  Sent: Sunday, May 01, 2005 4:44 PM
  To: [EMAIL PROTECTED]; Mysql
  Subject: Re: Can I use LIKE this way?


  I think you want the wildcard characters, % and _, to be within the
variable, not within the variable name. For example, if you are looking for
all advertisers whose name begins with 'A' followed by zero or more unknown
characters, you would set your variable equal to this pattern [I don't know
PHP so if this isn't how a variable declaration would look in PHP, adjust it
so it does]:

  myVar = 'A%'

  Then, you would execute your query, plugging in the variable in the
appropriate place [again, I don't know PHP so maybe you need a $myVar
instead of :myVar or at least something along those lines]:

  select *
  from  mytable
  where advertiser = :myVar

  In your question, you seem to want to put the wildcard character(s) in the
variable name like this:

  select *
  from mytable
  where advertiser = % :myVar

  While this might conceivably work in some programming languages, it's not
the way it is normally done in any language I know.
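For what it's worth, the two pattern placements can be checked directly; a quick sketch (SQLite's LIKE stands in here, and the host values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ads (host TEXT)")
conn.executemany("INSERT INTO ads VALUES (?)",
                 [("cgi.ebay.com",), ("shop.example.net",)])

# '%ebay.com%' with the wildcards butted right up against the text matches
# the substring anywhere; '% ebay.com %' would demand literal spaces around
# it and would miss 'cgi.ebay.com'.
hits = conn.execute(
    "SELECT host FROM ads WHERE host LIKE '%' || ? || '%'", ("ebay.com",)
).fetchall()
```

So no space is needed (or wanted) between % and the variable's value.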

  Rhino


Get a Random Row on a HUGE db

2005-04-26 Thread gunmuse



I want to display a random page from my site, but I have over 12,000
articles right now and we add over 150 per day.  What I wound up doing was
a virtual DoS attack on my own server, because the 40 MB db was being loaded
too many times.

I have tons of memory and a Dell dual Xeon 2.8 gig.

Can someone think up a better way of doing this?  I wish MySQL would just
bring me back 1 valid random row.  It could be used in so many ways it
should just be a part of MySQL anyway.

<?php
ini_set("display_errors", '1');
header("Pragma: private");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Cache-Control: no-cache, must-revalidate");
require_once("firebase.conf.php");
$dbi = new DBI(DB_URL);
$stmt = "Select * from firebase_content Order By rand() DESC Limit 0, 1";
$result = $dbi->query($stmt);
while ($row = $result->fetchRow())
{
 $title = $row->title;
 $cate = $row->category;
 $get = "Select cat_url from firebase_categories where cat_name='$cate'";
 $now = $dbi->query($get);
 $rows = $now->fetchRow();
 $url = $rows->cat_url;
 $link = $url . $title;
}
header("Location: http://www.prnewsnow.com/$link");
exit;
/* Pseudo code that I am trying to create to relieve server stress.
function randomRow(table, column) {
 var maxRow = query("SELECT MAX($column) AS maxID FROM $table");
 var randomID;
 var randomRow;
 do {
  randomID = randRange(1, maxRow.maxID);
  randomRow = query("SELECT * FROM $table WHERE $column = $randomID");
 } while (randomRow.recordCount == 0);
 return randomRow;
}
*/
?>

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183



RE: Get a Random Row on a HUGE db

2005-04-26 Thread gunmuse
Thanks for that; I implemented it in my random code.  Same problem: that
select * portion is just a nightmare.  Remember, I'm selecting 38 MB of data
when I do that.

What I want to do is jump to a valid random row.  Now, if I didn't delete
content often, that would be easy: grab the last auto-incremented row_id,
get a random number between 1 and the end, and jump to that row to create
the link.  Very fast, zero load.

So what I am trying is this:

$last_row = "SELECT from firebase_content LAST_INSERT_ID()";
$last_row_query = $dbi->query($last_row);
$last_row_result = $row->id;

But what I am seeing is this:

Object id #9

and not the number that is in the database.

What am I sending to this variable that is wrong?



[snip]
I am wanting to display a random page from my site, But I have over
12,000 articles right now and we add over 150 per day.  What I wound up
doing was a Virtual DOS attack on my own server because the 40 mb db was
being loaded to many times.

I have tons of memory and a Dell Dual Xeon 2.8 gig.

Can someone think up a better way of doing this?  I wish Mysql would
just bring me back 1 valid random row  It could be used in so many ways
it should just be a part of MySql anyway.

<?php
ini_set("display_errors", '1');
header("Pragma: private");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Cache-Control: no-cache, must-revalidate");
require_once("firebase.conf.php");
$dbi = new DBI(DB_URL);
$stmt = "Select * from firebase_content ORDER BY RAND(NOW()) LIMIT 1";
$result = $dbi->query($stmt);
while ($row = $result->fetchRow())
{
 $title = $row->title;
 $cate = $row->category;
 $get = "Select cat_url from firebase_categories where cat_name='$cate'";
 $now = $dbi->query($get);
 $rows = $now->fetchRow();
 $url = $rows->cat_url;
 $link = $url . $title;
}
header("Location: http://www.prnewsnow.com/$link");
exit;
/* Pseudo code that I am trying to create to relieve server stress.
function randomRow(table, column) {
 var maxRow = query("SELECT MAX($column) AS maxID FROM $table");
 var randomID;
 var randomRow;
 do {
  randomID = randRange(1, maxRow.maxID);
  randomRow = query("SELECT * FROM $table WHERE $column = $randomID");
 } while (randomRow.recordCount == 0);
 return randomRow;
}
*/
?>
[/snip]

Try this ...
SELECT * FROM foo ORDER BY RAND(NOW()) LIMIT 1;

12000 rows is not huge at all, so this should be pretty quick




RE: Get a Random Row on a HUGE db

2005-04-26 Thread gunmuse

What I had to do was do this against my navigation db and not my content db.
My server can easily handle lots of calls to a 4 MB table, then be told to
fetch the content once that has been achieved.

The reason I am bringing this up is that this seems to be a patched way of
doing it.

If I have 40,000 items in a db that get updated, and row_ids change for a
catalog, and I want to randomly display a product, I should be able to ask
MySQL for a random valid row.  It indexes with a primary key, so it knows
what is valid at that time.  There is too much jumping around just to get a
random row and be fair about it, so that no one row comes up every time or
more often than others.
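One sketch of that "random valid row" idea that avoids sorting the whole table (SQLite stands in for MySQL here; note the probe is slightly biased toward rows that follow large gaps):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE catalog (id INTEGER PRIMARY KEY, name TEXT)")
# Deliberately leave gaps, as if rows 2-4 and 6-8 had been deleted.
conn.executemany("INSERT INTO catalog VALUES (?, ?)",
                 [(1, "a"), (5, "b"), (9, "c")])

def random_row(conn):
    """Pick a random id up to MAX(id), then take the first row at or
    above it, so gaps from deleted rows still land on a valid row."""
    (max_id,) = conn.execute("SELECT MAX(id) FROM catalog").fetchone()
    r = random.randint(1, max_id)
    return conn.execute(
        "SELECT id, name FROM catalog WHERE id >= ? ORDER BY id LIMIT 1",
        (r,),
    ).fetchone()

row = random_row(conn)
```

Two cheap indexed queries instead of one full-table ORDER BY RAND() sort; the trade-off is the mild bias just mentioned.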


Gunmuse,

SELECT from firebase_content LAST_INSERT_ID()

In that cmd, 'from ...' ain't right.

I didn't understand either what's wrong with ORDER BY RAND() LIMIT 1.

Also check the Perl manual for how to retrieve a single value.

PB




RE: Get a Random Row on a HUGE db

2005-04-26 Thread gunmuse

The difference between using a 40 MB table and a 4 MB table with the same
traffic was a server load of 70 versus 0.9.  So it was the amount of data I
was selecting that was choking this feature.





RE: zip code search within x miles

2005-04-25 Thread gunmuse
http://www.gunmuse.com

OK, I use a store locator.

First, if you have 8000+ records it becomes an issue.  BUT lat and long are
in minutes, and minutes can be used to estimate miles.  Break down the lat
and long; break down the zip to a two-digit prefix (88254 becomes 88 for
indexing, because the post office goes in order, with some exceptions).
Then, with a wide lasso, you can rope in your results to do your math check
against.  Break your lat and long fields up into hours, minutes and seconds,
and filtering down becomes very easy to do.

Learning to read a map before determining the key and the distance
calculation would help in understanding this problem.
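The two-stage lasso described above might be sketched like this (zip values and coordinates are made up; the real thing would be a `WHERE zip_prefix = ?` hit on an indexed column, followed by the exact distance math over just the survivors):

```python
def lasso_candidates(rows, zip_code):
    """Stage 1: cheap cut by two-digit zip prefix; only these survivors
    go on to the expensive lat/long distance check (stage 2)."""
    prefix = zip_code[:2]
    return [r for r in rows if r["zip"].startswith(prefix)]

zips = [
    {"zip": "88254", "lat": 32.6, "lng": -104.4},
    {"zip": "88201", "lat": 33.4, "lng": -104.5},
    {"zip": "10001", "lat": 40.7, "lng": -74.0},
]
nearby = lasso_candidates(zips, "88254")  # drops the New York zip
```

The prefix cut throws away almost everything before any trigonometry runs, which is the whole point of the lasso.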

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Richard Lynch [mailto:[EMAIL PROTECTED]
Sent: Monday, April 25, 2005 12:05 AM
To: Hank
Cc: MySql
Subject: Re: zip code search within x miles


On Tue, April 19, 2005 8:55 am, Hank said:
 Talk about overcomplicating things... here's the above query simplified.

 I cannot figure out why they were self-joining the table three times:

 SELECT b.zip_code, b.state,
(3956 * (2 * ASIN(SQRT(
POWER(SIN(((a.lat-b.lat)*0.017453293)/2),2) +
COS(a.lat*0.017453293) *
COS(b.lat*0.017453293) *
POWER(SIN(((a.lng-b.lng)*0.017453293)/2),2))))) AS distance
 FROM zips a, zips b
 WHERE
a.zip_code = '90210'
 GROUP BY distance
 HAVING distance <= 5;

You'd have to time it, and *MAYBE* with enough indices this will all work
out, but you'd probably be better off doing two queries.

One to look up the long/lat for 90210, and another on just zips to
calculate the distance.

Benchmark on your own hardware and see for yourself.  I could be 100% wrong.
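The SQL above is the haversine formula with an Earth radius of 3956 miles; the same computation in plain code, handy for benchmarking against the in-database version (the city coordinates below are approximate and only for a sanity check):

```python
from math import asin, cos, radians, sin, sqrt

def miles_between(lat1, lng1, lat2, lng2):
    """Great-circle distance in miles, mirroring the SQL haversine query
    (the 0.017453293 constant in the SQL is just degrees-to-radians)."""
    dlat = radians(lat2 - lat1)
    dlng = radians(lng2 - lng1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlng / 2) ** 2)
    return 3956 * 2 * asin(sqrt(a))

d = miles_between(34.05, -118.24, 40.71, -74.01)  # roughly LA to NYC
```

Doing the lookup of the anchor zip's lat/long first and then running this over candidates is the two-query split suggested above.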

--
Like Music?
http://l-i-e.com/artists.htm





RE: Client Side Query cache

2005-04-18 Thread gunmuse
Linux does some sort of page caching automatically, and that's the reason
there is never any free memory in a Linux system.  So if you read your db
via XML and PHP and develop a page from that using a CSS style sheet, Linux
will cache it as a page.  (Took us forever to catch that one; something to
do with CSS, and no one really knows the answer that we have seen.)

As for a query cache, edit your my.cnf and set query_cache_type.  2 = cache
only if the query says to (my suggested setting; make your coders code for
caching instead of a catch-all setting).

This does cause some grief: if the data changed, you can see a cached result
instead (even though you're not supposed to).

Also, if you're running in PPC feeds you can't cache those, as you will hit
their timeouts and get no redirect results.

Our PPC XML feed (http://www.firebasesofware.com) only allows the link to be
valid for 3 minutes before your visitor will be redirected to our front
door.  This is not a traffic grab; it's just that 94% of cached results are
typically fraud clicks, so we don't allow the caching of results.  We are
more cautious than most (they use 5 minutes) only because we have a very
high-paying feed and it attracts the low-lifes of the internet world.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Mister Jack [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 14, 2005 4:49 AM
To: mysql@lists.mysql.com
Subject: Client Side Query cache


Hi,

I was wondering if there is any query cache code/lib somewhere to
cache certain queries?
I'm always doing the same queries (and the result never changes, so I
could spare the round-trip to the server), but caching the data for
them each time is a bit of work.
Thanks for your suggestions.
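Lacking a ready-made lib, a small client-side memo with a time-to-live is not much work; a sketch (the TTL and the fake query runner are placeholders):

```python
import time

class QueryCache:
    """Remember results per SQL string for `ttl` seconds, sparing the
    round-trip to the server for repeated identical queries."""
    def __init__(self, run_query, ttl=60.0):
        self.run_query = run_query   # callable that actually hits the server
        self.ttl = ttl
        self._store = {}             # sql -> (expires_at, rows)

    def query(self, sql):
        now = time.monotonic()
        hit = self._store.get(sql)
        if hit and hit[0] > now:
            return hit[1]            # still fresh: no server round-trip
        rows = self.run_query(sql)
        self._store[sql] = (now + self.ttl, rows)
        return rows

calls = []
def fake_server(sql):
    calls.append(sql)
    return [("row",)]

cache = QueryCache(fake_server, ttl=60.0)
first = cache.query("SELECT 1")
second = cache.query("SELECT 1")   # served from the cache
```

The obvious caveat, echoed above, is staleness: anything that can change server-side needs a short TTL or explicit invalidation.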




RE: Relative Numeric Values

2005-04-17 Thread gunmuse

Wouldn't creating completely unique keys for every animal be a waste of good
indexing?

It doesn't matter if 30 or 3,000 animals start with the same 3 letters if
you're using a 3-letter key for speed, as long as you avoided searching
through 30,000,000 records.

The method you described is of no speed benefit.  If you have 30,000,000
records and wind up with 30,000 keys as a result, that is a speed
improvement.  Putting in 30,000,000 keys, all you're doing is limiting
the number of characters searched and not the records searched.

Indexing everything only slows MySQL down.

Stick with your original plan, and reduce to 2 characters for your index if
the speed still isn't what you're looking for, or throw hardware at the
problem at that point.  That will reduce the number of records for the first
glance at the index.

My search engine on a small dual Xeon runs through 1.7 million records with
a 2-letter index for keywords in about .2 seconds.  It only has 8,142 keys
in the 2-letter index.  And I am crawling about 8,000 pages a day adding
content, without seeing a speed drop at this point.

When we get to the point of bottlenecking on searches, I intend to make an
index-jumping call:

Find * where 2 letter index equals 'ab' and 3 letter index equals 'abcd'

I am sure there will be a better way to write that, because at that time I
am certain 'abcd' may reside on different servers.



Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Kim Briggs [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 16, 2005 10:23 PM
To: David Blomstrom
Cc: mysql@lists.mysql.com
Subject: Re: Relative Numeric Values


David,

In reading through miscellaneous database design texts on the web, I
read just the other day that you should not try to include meaningful
data in your key values.  I assume there will be some kind of lookup
tables for species, phylum, whatever.  Trying to make your key field
"smart" seems like way too much overhead and complexity.  I'm
wondering why, if the database is enormous, you are being so short and
cryptic with the user-friendly values?
my $.02
KB

On 4/16/05, David Blomstrom [EMAIL PROTECTED] wrote:
 I think my question is more oriented towards PHP, but
 I'd like to ask it on this list, as I suspect the
 solution may involve MySQL.

 I'm about to start developing an enormous database
 focusing on the animal kingdom and want to find a key
 system more user friendly than the traditional
 scientific name.

 So imagine instead a page with the following in the
 head section:

 $AnimalID = 'canlup';

 This page displays information on the wolf, based on
 the first three letters of its genus and species name,
 Canis lupus.

 Now imagine a page with this value:

 $AnimalID = 'bal';

 This page displays information on the whale family
 Balaenidae. But what about the whale family
 Balaenopteridae, which begins with the same three
 letters?

 I could solve this problem by adding a numerical key
 to my database and displaying the following:

 $AnimalID = 'bal23';
 $AnimalID = 'bal24';

 The problem with this is that it makes it much harder
 to work with my data. When tweaking a page or writing
 a script, I can easily remember that bal = Balaenidae,
 but I can't possibly remember which numeral is
 associated with each mammal family. Also, what happens
 if I add or subtract rows from my database table, and
 the above values suddenly change to bal27 and bal28?

 So here's what I think I'd like to do:

 $AnimalID = 'canlup1';
 $AnimalID = 'bal1';
 $AnimalID = 'bal2';

 The page with canlup1 will display the FIRST (and
 only) instance of canlup in the database - the wolf.

 The page with bal1 will display the first instance of
 bal, which will always be Balaenidae, whether the
 absolute value is bal27 or bal2884. A page with bal2
 will always display the next mammal family that begins
 with bal, Balaenopteridae.

 So I THINK all I need to do is create a variable that
 reflects a particular value's ordinal position in a
 database...
 abc1
 abc2
 abc3, etc.

 Plus, I'll have to join two or three fields together
 to form a key; e.g. animals.species + animals.numerals

 Does anyone know how I can do this? Thanks.
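One way to derive such an ordinal on the fly, rather than storing it, is a self-join count (MySQL 4.x has no window functions). This is a sketch only; the `families` table and `family` column are hypothetical:

```sql
-- For each family, count the families sharing its 3-letter prefix
-- that sort at or before it; that count is the ordinal (bal1, bal2, ...).
SELECT CONCAT(LEFT(a.family, 3), COUNT(*)) AS animal_id, a.family
FROM families a
JOIN families b
  ON LEFT(b.family, 3) = LEFT(a.family, 3)
 AND b.family <= a.family
GROUP BY a.family
ORDER BY a.family;
```

Note the ordinal still shifts if an earlier-sorting row is later inserted for the same prefix, which is exactly the instability described above for plain numeric suffixes.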

 __
 Do you Yahoo!?
 Plan great trips with Yahoo! Travel: Now over 17,000 guides!
 http://travel.yahoo.com/p-travelguide

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: zip code search within x miles

2005-04-15 Thread gunmuse
We convert the zip code into a latitude and longitude, run the math looking
for all other zips in that area, then use the lat/long of each match for a
mileage calculation.

I know there's a better way to do this; we just haven't seen the benefit in
rewriting it now.

Watch PHP: a lot of this is coming out in built-in functions, and Perl
already has some functions to do this, I believe; you just need the db.
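The "run the math" step is usually a great-circle (haversine) distance. A sketch in SQL, assuming a hypothetical `zips` table with `lat`/`lon` columns in decimal degrees:

```sql
-- Distance in miles from a fixed point (@lat, @lon) to every zip,
-- using the haversine formula; 3959 is the Earth's radius in miles.
SET @lat = 38.1041, @lon = -104.0092;

SELECT zip,
       3959 * 2 * ASIN(SQRT(
           POW(SIN(RADIANS(lat - @lat) / 2), 2) +
           COS(RADIANS(@lat)) * COS(RADIANS(lat)) *
           POW(SIN(RADIANS(lon - @lon) / 2), 2))) AS miles
FROM zips
HAVING miles <= 25
ORDER BY miles;
```

Filtering on the alias via `HAVING` is a MySQL extension; in practice a bounding-box `WHERE` on lat/lon is added first so an index can prune most rows before the trigonometry runs.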

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Scott Haneda [mailto:[EMAIL PROTECTED]
Sent: Friday, April 15, 2005 4:38 PM
To: MySql
Subject: zip code search within x miles


How are sites doing the search by zip and coming up with results within x
miles?  Is there some OSS zip code download that has been created for this?
--
-
Scott HanedaTel: 415.898.2602
http://www.newgeo.com Novato, CA U.S.A.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: zip code search within x miles

2005-04-15 Thread gunmuse
I have a copy of the zip code db for MySQL.  It's a few years old but should
be 99% accurate compared to new ones.


Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Scott Gifford [mailto:[EMAIL PROTECTED]
Sent: Friday, April 15, 2005 6:01 PM
To: Scott Haneda
Cc: MySql
Subject: Re: zip code search within x miles


Scott Haneda [EMAIL PROTECTED] writes:

 How are sites doing the search by zip and coming up with results within x
 miles?  Is there some OSS zip code download that has been created for
this?

Zipdy does most of what you want; it needs to be modified to support
MySQL instead of PostgreSQL, but that shouldn't be too hard.  It also
has the great circle function you need to calculate the distances
correctly.  You can get it from:

http://www.cryptnet.net/fsp/zipdy/

If you're using Perl, Geo::PostalCode works very well, though it
doesn't use an SQL database at all:

http://search.cpan.org/~tjmather/Geo-PostalCode-0.06/lib/Geo/PostalCode.
pm

---ScottG.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Boolean searches

2005-04-08 Thread gunmuse



When using the Boolean search capabilities, I am finding it terribly slow.
Since I am sure MySQL uses this on their own site, what version of MySQL is
the fastest at this -- 5.0?

I am using PHPMySearch and I really think he did a fair job on the crawler
part.  I built an XML converter for the results, so I can now crawl websites
and output XML; it's just that the search on even 5000 rows (20 MB of data)
is very slow.

Indexing seems to be done properly to support this.  I played with tweaking
the character settings of the FULLTEXT search -- took it to 5, tried it at
3 -- no luck.

I just think it should be faster than it is.

The default indexes are:

URL          UNIQUE    5451   URL
expiresFlag  INDEX     1      expiresFlag
title        FULLTEXT  1      title
keywords
body_1       FULLTEXT  1      body_1
body_2

Maybe his PHP is querying the database wrongly or is out of date.  I am
running MySQL 4.1.8.  What would be a good example of how to query the db to
bring back ranked results based on what it found?

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183



RE: WARNING!!!! abuser on this list?

2005-04-08 Thread gunmuse
URL  UNIQUE  5451   URL
expiresFlag  INDEX  1   expiresFlag
title  FULLTEXT  1   title
keywords
body_1  FULLTEXT  1   body_1
body_2

Whoops on the html copy, calm down there guy.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: l'[EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, April 08, 2005 11:52 AM
To: mysql@lists.mysql.com
Subject: WARNING abuser on this list?



When I opened the message sent by:
[EMAIL PROTECTED]
subject: Boolean searches


a pop up box appeared stating

Connect to 70.84.29.164
Web host manager
Username mymachine/administrator
password:



Has anybody encountered this problem when you clicked on his email?
Should we FLAME that guy?
In the meantime I am sending an email to [EMAIL PROTECTED], which owns that
IP address.


Here is the headers of his email:

Received: (qmail 7362 invoked by uid 109); 8 Apr 2005 17:13:25 -
Mailing-List: contact [EMAIL PROTECTED]; run by ezmlm
List-ID: mysql.mysql.com
Precedence: bulk
List-Help: mailto:[EMAIL PROTECTED]
List-Unsubscribe:
mailto:[EMAIL PROTECTED]
List-Post: mailto:mysql@lists.mysql.com
List-Archive: http://lists.mysql.com/mysql/182353
Delivered-To: mailing list mysql@lists.mysql.com
Received: (qmail 7312 invoked from network); 8 Apr 2005 17:13:24 -
Received-SPF: pass (lists.mysql.com: local policy)
Reply-To: [EMAIL PROTECTED]
From: [EMAIL PROTECTED]
To: Mysql mysql@lists.mysql.com
Subject: Boolean searches
Date: Fri, 8 Apr 2005 11:13:13 -0600
Message-ID: [EMAIL PROTECTED]
MIME-Version: 1.0
Content-Type: multipart/related;
 boundary==_NextPart_000_0098_01C53C2B.F4751020
X-Priority: 3 (Normal)
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook IMO, Build 9.0.6604 (9.0.2911.0)
Importance: Normal
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2527
X-PopBeforeSMTPSenders:
[EMAIL PROTECTED],[EMAIL PROTECTED],[EMAIL PROTECTED],dealerfin
[EMAIL PROTECTED],[EMAIL PROTECTED],[EMAIL PROTECTED],[EMAIL PROTECTED]
.com,gunmuse,[EMAIL PROTECTED],[EMAIL PROTECTED],[EMAIL PROTECTED]
e.com,[EMAIL PROTECTED],[EMAIL PROTECTED],[EMAIL PROTECTED],paymen
[EMAIL PROTECTED],[EMAIL PROTECTED],[EMAIL PROTECTED],[EMAIL PROTECTED]
,redmoon,[EMAIL PROTECTED],[EMAIL PROTECTED],[EMAIL PROTECTED]
gunsales.com
X-AntiAbuse: This header was added to track abuse, please include it with
any abuse report
X-AntiAbuse: Primary Hostname - pistol.gunmuse.us
X-AntiAbuse: Original Domain - lists.mysql.com
X-AntiAbuse: Originator/Caller UID/GID - [0 0] / [47 12]
X-AntiAbuse: Sender Address Domain - gunmuse.com
X-Source:
X-Source-Args:
X-Source-Dir:
X-Virus-Checked: Checked
X-Spam-Checker-Version: SpamAssassin 3.0.2 (2004-11-16) on c.spam
X-Spam-Status: No, score=1.1 required=5.0 tests=DNS_FROM_AHBL_RHSBL,
 HTML_MESSAGE,HTML_TAG_EXIST_TBODY,NORMAL_HTTP_TO_IP,NO_REAL_NAME,
 WEIRD_PORT autolearn=disabled version=3.0.2
X-Spam-Level: *

x-html

thanks

Laurie




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Boolean searches

2005-04-08 Thread gunmuse
My first post had HTML in it, where I posted the indexing that I am using.


URL  UNIQUE  5451   URL
expiresFlag  INDEX  1   expiresFlag
title  FULLTEXT  1   title
keywords
body_1  FULLTEXT  1   body_1
body_2

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
  Sent: Friday, April 08, 2005 11:13 AM
  To: Mysql
  Subject: Boolean searches


  When using the Boolean search capabilities, I am finding it terribly slow.
Since I am sure MySQL uses this on their own site, what version of MySQL is
the fastest at this -- 5.0?

  I am using PHPMySearch and I really think he did a fair job on the
crawler part.  I built an XML converter for the results, so I can now crawl
websites and output XML; it's just that the search on even 5000 rows (20 MB
of data) is very slow.

  Indexing seems to be done properly to support this.  I played with
tweaking the character settings of the FULLTEXT search -- took it to 5,
tried it at 3 -- no luck.

  I just think it should be faster than it is.

  The default indexes are.
  Maybe his PHP is querying the database wrongly or is out of date.  I am
running MySQL 4.1.8.  What would be a good example of how to query the db to
bring back ranked results based on what it found?
  Thanks
  Donny Lairson
  President
  29 GunMuse Lane
  P.O. box 166
  Lakewood NM 88254
  http://www.gunmuse.com
  469 228 2183


RE: mysql_query Looping

2005-04-08 Thread gunmuse
We overcame that problem to infinite levels with our blog software.  Instead
of loops we use anchor points in the URL to tell the navigation where it
is at all times.  This allows for a more dynamic navigation system, as you
can have nav trees that not only expand with more subcategories but also
collapse back into a single point.

Example

http://mydomain.com/CAT1/CAT2/CAT3/CAT4/ARTICLE1.html

Now You have also a tree like
http://mydomain.com/CAT1/CAT4/Article1.html
http://mydomain.com/CAT1/CAT3/Article1.html

There is no duplication of data, but say a privacy policy or contact info
page that needs to be everywhere doesn't need multiple links in the db to
make it happen, because we are looking at two anchor points:

CAT1 and Article1.html.  What's in the middle doesn't matter, because it's
not on that page at that time (yet it is in the URL).  This means you can
build pages freely without regard to where they need to go or how to move
them later.  As for your expanding tree (like DHTML or JavaScript) for the
next level down, to give it a Windows appearance: that would simply be a
matter of calling the next anchor points on mouseover and showing the
display (some JavaScript/DHTML and CSS styling).

It's a very complicated nav system, but it allows us to be versatile: when
we decide to use third-party programs, we can create a nav structure within
our domain's website that will run it entirely, all from a clean interface,
no coding required.  Just input URLs and hit enter.

Here are some links to our software.  It's free to use but not GPL, as we
have done some groundbreaking stuff with it.  You may not want to try to
reinvent the wheel when all you need to do is create a CSS style sheet on
our system to do a mouseover menu.

http://www.firebasesoftware.com/firebase_downloads/firebase2.0_client_linux.
zip
http://www.firebasesoftware.com/firebase_downloads/firebase2.0_client_window
s.zip

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Celebrity Institute [mailto:[EMAIL PROTECTED]
Sent: Friday, April 08, 2005 1:32 PM
To: mysql@lists.mysql.com
Subject: mysql_query Looping


Hi, just got introduced to the list and I hope I can find some help here.

I'm trying to figure out the true syntax of a project I'm working on.
Basically there's a menu on my site that has several categories, and in
those categories are second-, and sometimes third- and fourth-level sub
lists.  I want to do an on-click open-up, and I've got a good script for
on-click fold-out lists, but it's the SQL syntax loops I'm not sure how to
configure, so I'm trying to figure out at this point how to make a nested
list of the items.  Then I'll worry about collapsing them.

Here's the pseudo code I'm using as my guide for my layout:



Quote

ADMIN SIDE

Category Tool
We would need to set a order of display for the category, and where its
bound, (main catagoies would be bound to Main Menu, while subs would be
bound to their master).

Code wise we need something like

if Master = Main Menu display these entries order by Order of
display for the main link and then to show the sub links we would need
a nested loop to the effect

recursive if master = Thisentry[x] display these entries order by
Order of display


1. Beauty and Health
1.1. Cosmetics
1.2. Diet  Nutrition
1.3. Fashion
1.4. Fitness
2. Computer/Electronics
2.1. Software
2.2. Hardware
2.3. Internet
2.4. Photography
2.5. Wireless
2.6. Audio
2.7. Video
3. Home and Child Care
3.1. Art and Decoration
3.2. Career/Work
3.3. Eating And Dinning
3.4. Education
3.5. Gifts
3.6. Household
3.6.1. Bedding
3.6.2. Flooring
3.6.3. Furniture
3.6.4. Houseware/Appliances
3.6.5. Gardening
3.6.6. Tools
3.7. Pet
3.8. Staffing Services
4. On The Go
4.1. Autos
4.2. Planes
4.3. Travel  Getaways
4.4. Yacts
5. Recreation
5.1. Movies
5.2. Music
5.3. TV
5.4. Radio
5.5. Reading
5.6. Games
6. Services
6.1. Business
6.2. Charities
6.3. Finance
6.4. Insurance
6.5. Legal
6.6. Medical
6.7. Real Estate
7. Shopping (Todays Special and Hot Stuff sub areas)
7.1. Home
7.2. Apparel and Accessories
7.3. Beauty and Health
7.4. Books, Movies  Music
7.5. Computing and Office
7.6. Gifts, Flowers  Gourmet
7.7. Jewelry  Watches
7.8. Sports  Outdoors
7.9. Toys, Kids  Baby
7.10. A-Z Store Directory
8. Sports
8.1. Clothing and Gear
8.2. Equipment
9. On The Runway



this is pre-coding; I'm basically working on my design documentation.  I
know what I want in pseudo-code and am trying to figure out what I need
to do in real code to get there.

Basically I know how I need to set up the DB, but am unsure on just how
precisely to properly display that data in a format similar to the one
above, where some titles are a sub section of a link or sub sections of a
sub section, something to the effect of
code

FW: GWAVA Sender Notification (Spam)

2005-04-08 Thread gunmuse



This is what I call WAY OVER REACTING.

This member turned my email into the spam report immediately without
thinking.  Now I am receiving these.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, April 08, 2005 12:51 PM
Subject: GWAVA Sender Notification (Spam)

A message sent by you was blocked by GWAVA - Content protection for Novell
GroupWise.
The message was blocked for the following reason(s):

  Spam

The message contained the following information:

  Subject:      RE: WARNING abuser on this list?
  From:         [EMAIL PROTECTED]
  Recipient(s): [No To Addresses] [No Cc Addresses] [EMAIL PROTECTED],
                [EMAIL PROTECTED]

The following information details the events that prevented delivery of
this message:

  Event:   Spam
  Details: The message was identified as potential spam

RE: Performance Tuning - Table Joins

2005-04-04 Thread gunmuse
You're not indexing properly; this should be a blink of a search.  Or you're
looping your loops when you search.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: mos [mailto:[EMAIL PROTECTED]
Sent: Monday, April 04, 2005 1:30 PM
To: MySQL list
Subject: Re: Performance Tuning - Table Joins


At 12:22 PM 4/4/2005, you wrote:
I have been struggling to maintain decent performance on a web/database
server for a good 6 months now due to MySQL performance issues. I have
decided that my best option at this point is to take it to the list, so in
advance, I thank you all for taking a look.

There is no error messages that can be posted, so I will try and describe
what's happening as best I can.

I am joining 3 tables in one query. I have had numerous people examine the
queries and all have given their stamp of approval. What happens when I
run it is MySQL takes the processor for a ride, spiking it to 100% until I
restart mysqld.

The tables range from 50,000 to 85,000 records, and the join is only
supposed to return 1 record.

My question to you is this: are there changes I can make to the
configuration to improve performance? --or-- is data de-normalization my
best option?

Is there any more information you need from me to answer this question?

Current setup:
 2.4ghz Pentium 4, 1gb ram, 360gb 4-disc raid 5 array w/ 3ware
 chassis and card, fedora core 3 w/ all patches and updates, selinux
 -disabled-,  mysql 4.1.10a, MyISAM table format.

Again, thank you all in advance,
Jason


Jason,
 Try running Analyze Table on each of the tables. This will
rebalance the index and get rid of deleted space. Returning one row from a
3 table join should take only ms if you're using indexes properly.
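Concretely, that suggestion might look like this, with hypothetical table and column names:

```sql
-- Rebalance the indexes and refresh key distribution statistics:
ANALYZE TABLE orders, customers, items;

-- Then check that the join really uses indexes; any row showing
-- type: ALL (a full table scan) is a candidate for a new index.
EXPLAIN
SELECT o.id
FROM orders o
JOIN customers c ON c.id = o.customer_id
JOIN items i     ON i.order_id = o.id
WHERE c.email = 'user@example.com';
```

A 100% CPU spike on a 3-table join usually means the EXPLAIN would show a cross product (missing join index), not a server tuning problem.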

Mike


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: license question

2005-03-30 Thread gunmuse
MySQL loses money from many vendors on this very point, on which they do
not budge.

We have a point-of-sale software company who can distribute Oracle cheaper:
Oracle only requires a percentage of the final product price that their
product is packaged with.  When the company explained they would rather use
MySQL and pay them the same rates, MySQL refused.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Daevid Vincent [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 30, 2005 10:18 PM
To: 'Pat Ballard'; mysql@lists.mysql.com
Subject: RE: license question


As my company and I understand it, if you intend on distributing mySQL on
this appliance and the appliance is a sealed box with your own proprietary
code (like PHP or C or Java or whatever) that interfaces to the
STOCK/Untouched RDBMS, you NEED a mySQL Commercial License.

This license is a ridiculous $600 per unit which makes it completely
unrealistic for any large scale deployment!!! I mean, I don't mind paying
someone for their work, but I was thinking more like $50 per unit, not  10
times that.

If someone from mySQL can clarify that would be great, but this is how I
read the license and that's why we've stuck to v4.0.18 which was GPL.

http://www.mysql.com/company/legal/licensing/opensource-license.html

Our software is 100% GPL (General Public License); if yours is 100% GPL
compliant, then you have no obligation to pay us for the licenses. 

Free use for those who never copy, modify or distribute. As long as you
never distribute the MySQL Software in any way, you are free to use it for
powering your application, irrespective of whether your application is under
GPL license or not.

If you are a private individual you are free to use MySQL software for your
personal applications as long as you do not distribute them. If you
distribute them, you must make a decision between the Commercial License and
the GPL.


http://www.mysql.com/company/legal/licensing/commercial-license.html

Building a hardware system that includes MySQL and selling that hardware
system to customers for installation at their own locations.

If you include the MySQL server with an application that is not licensed
under the GPL or GPL-compatible license, you need a commercial license for
the MySQL server.



 -Original Message-
 From: Pat Ballard [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, March 30, 2005 4:03 PM
 To: mysql@lists.mysql.com
 Subject: license question

 Suppose i distribute MySQL-4.1 with an appliance,
 which is a sealed x86 machine running a Linux
 distribution made by another entity (ok, it's Red
 Hat). I don't write any code that's directly linked to
 MySQL, I'm only using the existing php-mysql, etc.,
 packages already provided by the distribution, plus
 some third-party apps that are under GPL and link to
 MySQL (applications that access MySQL, not written by
 me, but are Open Source GPL projects off SourceForge
 and other places - i just bundle them with the
 appliance).
 Any code that I write personally is PHP and sits on
 top of the php-mysql module provided by Red Hat.

 The end-user has no direct visibility to the database,
 in fact, the end-user might never know it's MySQL -
 all that is visible is the PHP interface, via Apache.

 In this case, what's the license? Is MySQL still free
 (under GPL)?

 --
 Pat Ballard


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow access Apache + PHP + MySQL

2005-03-26 Thread gunmuse
Your request is just too broad to answer.  There are lots of places to look,
but I doubt PHP is causing your slowdowns.  Poorly written PHP code will
cause slowdowns, though, not the server-compiled PHP itself.

There are lots of server monitoring scripts; with monitoring and logging you
can start to trace down problems like this.

You also missed an important part of any server with slowdowns: the network
itself can cause load and grief.

I can tell you that most default setups are very tight on resource use, and
the helpful information others will point you to on the web just wasn't
written for 4 GB RAM and 64-bit servers -- more along the lines of a nice
high-performance PIII.  So be careful what you read.  Learn the math to do
it right, not the example settings you see in forums.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Andre Matos [mailto:[EMAIL PROTECTED]
Sent: Saturday, March 26, 2005 8:47 AM
To: 'mysql@lists.mysql.com '
Subject: Slow access Apache + PHP + MySQL


Hi List,

I have 4 web-based systems developed using PHP4 and MySQL, accessed by 10
users.  The web server and database server were running OK on a Mac OS X
10.3 G4 dual.  However, since we moved to a new server, access has become
very slow.  This was not expected, since we moved to a 64-bit
high-performance machine.

Now, we are using MySQL version 4.1.9 with Apache 2.0.52 and PHP 4.3.10, all
compiled and running on a Linux Fedora x86_64.

My first thought was the systems, but since I have not changed 3 of the 4
systems, I started to look at the database.  I monitored MySQL using MySQL
Administrator, but I couldn't identify any problem.  It looks OK, but I'm
not completely sure it really is.

The system administrator told me it could be the PHP sessions, but again,
he also was not completely sure about this.

It is a big problem since I need to check in 3 places: MySQL, Apache, or
PHP.

Does anyone had this kind of problem or has any suggestion or direction to
help me to identify and solve this issue?

Any help will be appreciated!!!

Thanks.

Andre

--
Andre Matos
[EMAIL PROTECTED]

--
Andre Matos
[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Heap table says its Full?

2005-03-25 Thread gunmuse
I took a guess at that yesterday.

I left the line
tmp_table_size = 128M
and added the line
max_heap_table_size = 500M

But to no avail.  I am still limited to 12.7M.

I am using 4.1.8 as installed by cPanel.



Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Harrison Fisk [mailto:[EMAIL PROTECTED]
Sent: Friday, March 25, 2005 1:30 AM
To: [EMAIL PROTECTED]
Cc: Mysql
Subject: Re: Heap table says its Fuul?


Hi,

On Mar 24, 2005, at 6:07 PM, [EMAIL PROTECTED] wrote:

 Mysql is telling me my Heap table is full.  Now I set it to 128M.
  
 my.cnf line
 tmp_table_size = 128M

Try changing the setting called max_heap_table_size.  tmp_table_size
only has to do with internal temporary tables that are used to resolve
a query (ie. when you see a 'Using Temporary' in the EXPLAIN)
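A sketch of the distinction, assuming standard MySQL 4.1 behavior (the limit in effect when a MEMORY table is created is the one that applies to it):

```sql
-- In my.cnf, under [mysqld]:
--   max_heap_table_size = 128M   -- caps explicit MEMORY/HEAP tables
--   tmp_table_size      = 128M   -- caps implicit in-memory temp tables

-- Or at runtime, affecting MEMORY tables created afterwards:
SET GLOBAL max_heap_table_size = 128 * 1024 * 1024;
```

Existing MEMORY tables keep the limit they were created under, so the table generally has to be dropped and recreated after the change.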

  
  The Table filled up at 12.7M.  This appears to be very close to 128M
 with a decimal out of place.
  Did I find a Bug?
  Am I doing something wrong?
 Is the tmp_table_size PER TABLE or for all mysql heap tables?

 I can't seem to get past this 12.7M mark.  I need 128M of heap to run my
 looping searches with.
  
  
 CREATE TABLE `fsearch_searchheap` (
   `searchAffid` int(11) NOT NULL default '0',
   `searchKeyword` varchar(100) NOT NULL default '',
   `searchReferrer` varchar(100) NOT NULL default '',
   `searchIp` varchar(15) NOT NULL default '',
   KEY `searchAffid` (`searchAffid`),
   KEY `searchKeyword` (`searchKeyword`)
 ) ENGINE=MEMORY DEFAULT Select * from fsearch_search;
  
  

Regards,

Harrison

--
Harrison C. Fisk, Trainer and Consultant
MySQL AB, www.mysql.com

Get a jumpstart on MySQL Cluster --
http://www.mysql.com/consulting/packaged/cluster.html


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Heap table says its Full?

2005-03-25 Thread gunmuse
OK, never mind my last message, because I didn't change anything and it
worked this morning.

Next problem: I copied a 21 MB db to the heap and it reported 248M of data
once there?

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, March 25, 2005 8:12 AM
To: Harrison Fisk
Cc: Mysql
Subject: RE: Heap table says its Full?


I took a guess at that yesterday.

I left the line
tmp_table_size = 128M
and added the line
max_heap_table_size = 500M

But to no avail.  I am still limited to 12.7M.

I am using 4.1.8 as installed by cPanel.



Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Harrison Fisk [mailto:[EMAIL PROTECTED]
Sent: Friday, March 25, 2005 1:30 AM
To: [EMAIL PROTECTED]
Cc: Mysql
Subject: Re: Heap table says its Fuul?


Hi,

On Mar 24, 2005, at 6:07 PM, [EMAIL PROTECTED] wrote:

 Mysql is telling me my Heap table is full.  Now I set it to 128M.
  
 my.cnf line
 tmp_table_size = 128M

Try changing the setting called max_heap_table_size.  tmp_table_size
only has to do with internal temporary tables that are used to resolve
a query (ie. when you see a 'Using Temporary' in the EXPLAIN)

  
  The Table filled up at 12.7M.  This appears to be very close to 128M
 with a decimal out of place.
  Did I find a Bug?
  Am I doing something wrong?
 Is the tmp_table_size PER TABLE or for all mysql heap tables?

 I can't seem to get past this 12.7M mark.  I need 128M of heap to run my
 looping searches with.
  
  
 CREATE TABLE `fsearch_searchheap` (
   `searchAffid` int(11) NOT NULL default '0',
   `searchKeyword` varchar(100) NOT NULL default '',
   `searchReferrer` varchar(100) NOT NULL default '',
   `searchIp` varchar(15) NOT NULL default '',
   KEY `searchAffid` (`searchAffid`),
   KEY `searchKeyword` (`searchKeyword`)
 ) ENGINE=MEMORY DEFAULT Select * from fsearch_search;
  
  

Regards,

Harrison

--
Harrison C. Fisk, Trainer and Consultant
MySQL AB, www.mysql.com

Get a jumpstart on MySQL Cluster --
http://www.mysql.com/consulting/packaged/cluster.html


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Heap table says its Fuul?

2005-03-24 Thread gunmuse



Mysql is telling me my Heap table is full.  Now I set it to 128M.

my.cnf line:
tmp_table_size = 128M

The Table filled up at 12.7M.  This appears to be very close to 128M with a
decimal out of place.
Did I find a Bug?
Am I doing something wrong?
Is the tmp_table_size PER TABLE or for all mysql heap tables?

I can't seem to get past this 12.7M mark.  I need 128M of heap to run my
looping searches with.

CREATE TABLE `fsearch_searchheap` (
  `searchAffid` int(11) NOT NULL default '0',
  `searchKeyword` varchar(100) NOT NULL default '',
  `searchReferrer` varchar(100) NOT NULL default '',
  `searchIp` varchar(15) NOT NULL default '',
  KEY `searchAffid` (`searchAffid`),
  KEY `searchKeyword` (`searchKeyword`)
) ENGINE=MEMORY DEFAULT Select * from fsearch_search;

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183



RE: Auto loading a table

2005-03-14 Thread gunmuse
Thanks, that gives me options.  Yes, the table was already created; what I
wanted was for the table itself to know, when MySQL reloads, to go and get
everything from another table.

My understanding was that this was just something I did when I created the
table the first time, as a characteristic of the table, so that it knows on
load to select * from Test2.

This lets me maintain stability and speed at the same time.  While I write
to 2 tables, I always read from 1, and reading is done at least 95 times
more often.

I have set my.cnf to 128M for memory tables as the default, but it appears I
still stop at the 10M limit on memory tables anyway.  Should I add something
to the creation of the table to override the defaults locally for that
table?

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED]
Sent: Monday, March 14, 2005 8:28 AM
To: [EMAIL PROTECTED]
Cc: Gleb Paharenko; mysql@lists.mysql.com
Subject: Re: Auto loading a table


[EMAIL PROTECTED] wrote:

  [Donny Lairson] Quick bump I never got an answer
 
I have a table fsearch_temp I use it as a memory table to keep things
  light and fast but after a restart I want to repopulate some data
  automatically.  So I thought I just said load from TEST2 which would by a
  myisam table containing the hardbackup I need.  But obviously not the
  right way of saying this.  I must be reading the instructions wrong can
  someone clarify this for me?
snip

Which instructions are you reading?  I expect you get a syntax error, right?
  From the manual http://dev.mysql.com/doc/mysql/en/create-table.html, the
correct syntax is

   CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
 [(create_definition,...)]
 [table_options] [select_statement]

Gleb Paharenko wrote:
 Hello.

) ENGINE = MEMORY LOAD FROM TEST2 DEFAULT CHARSET = utf8 AUTO_INCREMENT =0

 You should use select statement, not LOAD. For example:
  CREATE TABLE ... SELECT * FROM TEST2;

 And table options like DEFAULT CHARSET you should put before select
statement.
 See:
   http://dev.mysql.com/doc/mysql/en/create-table.html


I think this is accurate but misleading.  CREATE ... SELECT adds columns
from the SELECT to the columns defined in the CREATE, so you cannot fix this
simply by getting the last line right.  You have to leave out the column
definitions.  On the other hand (from the manual page you cite),

   CREATE TABLE ... SELECT does not automatically create any indexes  for
   you. This is done intentionally to make the statement as flexible as
   possible. If you want to have indexes in the created table, you should
   specify these before the SELECT statement...

so you do need to keep the index definitions.  Thus, assuming fsearch_temp's
create_definition matches that of table TEST2, to create fsearch_temp as a
copy of TEST2, you would

   CREATE TABLE fsearch_temp
( PRIMARY KEY (fsearchId), KEY fsearchIp (fsearchIp)
) ENGINE = MEMORY DEFAULT CHARACTER SET utf8
   SELECT * FROM TEST2;

but I don't think this is what you want, either.

First, there is this caveat (from the manual):

   Some conversion of column types might occur. For example, the
   AUTO_INCREMENT attribute is not preserved, and VARCHAR  columns can
become
   CHAR columns.

To avoid that, you need to first CREATE the table, then populate it with a
copy of TEST2 in a separate INSERT ... SELECT statement.  See the manual for
details http://dev.mysql.com/doc/mysql/en/insert-select.html.

In any case, MEMORY tables don't go away unless they are dropped.  Only the
rows disappear when mysql stops.  If you've previously created this table
and haven't dropped it, it should still exist as an empty table on startup.
  In that case, you only need to reload the rows.

   INSERT INTO fsearch_temp SELECT * FROM TEST2;

Michael

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Auto loading a table

2005-03-14 Thread gunmuse
Your rudeness is not warranted in any manner.  If the list's answer to every
question were to hire a consultant, what would be the point of peer-to-peer
help?  The list gets busy at times, and I got excellent responses from
qualified and courteous people from a simple bump after waiting a respectful
4 days.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: sol beach [mailto:[EMAIL PROTECTED]
Sent: Sunday, March 13, 2005 3:11 PM
To: [EMAIL PROTECTED]
Subject: Re: Auto loading a table


With free advice, you get what you paid for it.

Nobody here owes you or anyone a response.

If you expect answers, pay a consultant for them.

On Sun, 13 Mar 2005 14:23:40 -0700, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:


 [Donny Lairson] Quick bump I never got an answer



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Auto loading a table

2005-03-13 Thread gunmuse


[Donny Lairson] Quick bump I never got an answer


  I have a table fsearch_temp.  I use it as a memory table to keep things
light and fast, but after a restart I want to repopulate some data
automatically.  So I thought I just said load from TEST2, which would be a
MyISAM table containing the hard backup I need.  But obviously that's not the
right way of saying this.  I must be reading the instructions wrong; can
someone clarify this for me?

  CREATE TABLE `fsearch_temp` (
  `fsearchId` bigint( 19 ) unsigned NOT NULL AUTO_INCREMENT ,
  `fsearchHost` varchar( 100 ) NOT NULL default '',
  `fsearchSite` varchar( 255 ) NOT NULL default '',
  `fsearchDescription` varchar( 255 ) NOT NULL default '',
  `fsearchUrl1` varchar( 255 ) NOT NULL default '',
  `fsearchUrl2` varchar( 255 ) NOT NULL default '',
  `fsearchUrl3` varchar( 255 ) NOT NULL default '',
  `fsearchUrl4` varchar( 255 ) NOT NULL default '',
  `fsearchTime` int( 11 ) NOT NULL default '0',
  `fsearchIp` varchar( 22 ) NOT NULL default '',
  `fsearchKeyword` varchar( 100 ) NOT NULL default '',
  `fsearchBid` varchar( 11 ) NOT NULL default '0',
  `fsearchClicked` varchar( 6 ) NOT NULL default 'no',
  PRIMARY KEY ( `fsearchId` ) ,
  KEY `fsearchIp` ( `fsearchIp` )
  ) ENGINE = MEMORY LOAD FROM TEST2 DEFAULT CHARSET = utf8 AUTO_INCREMENT =0
  Thanks
  Donny Lairson
  President
  29 GunMuse Lane
  P.O. box 166
  Lakewood NM 88254
  http://www.gunmuse.com
  469 228 2183


Auto loading a table

2005-03-11 Thread gunmuse



I have a table fsearch_temp. I use it as a memory table to keep things light
and fast but after a restart I want to repopulate some data automatically. So
I thought I just said load from TEST2 which would be a myisam table containing
the hardbackup I need. But obviously not the right way of saying this. I must
be reading the instructions wrong can someone clarify this for me?

CREATE TABLE `fsearch_temp` (
  `fsearchId` bigint( 19 ) unsigned NOT NULL AUTO_INCREMENT ,
  `fsearchHost` varchar( 100 ) NOT NULL default '',
  `fsearchSite` varchar( 255 ) NOT NULL default '',
  `fsearchDescription` varchar( 255 ) NOT NULL default '',
  `fsearchUrl1` varchar( 255 ) NOT NULL default '',
  `fsearchUrl2` varchar( 255 ) NOT NULL default '',
  `fsearchUrl3` varchar( 255 ) NOT NULL default '',
  `fsearchUrl4` varchar( 255 ) NOT NULL default '',
  `fsearchTime` int( 11 ) NOT NULL default '0',
  `fsearchIp` varchar( 22 ) NOT NULL default '',
  `fsearchKeyword` varchar( 100 ) NOT NULL default '',
  `fsearchBid` varchar( 11 ) NOT NULL default '0',
  `fsearchClicked` varchar( 6 ) NOT NULL default 'no',
  PRIMARY KEY ( `fsearchId` ) ,
  KEY `fsearchIp` ( `fsearchIp` )
) ENGINE = MEMORY LOAD FROM TEST2 DEFAULT CHARSET = utf8 AUTO_INCREMENT =0

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183



Loading data on startup

2005-02-25 Thread gunmuse



I need to copy data from TABLE A to TABLE B (Memory Table) on MySql startup or
restart.

MySql --init-file on startup is obviously something I need to use. Could I get
an example of what a sql would look like to start the memory table and
completely copy data from table A?

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183
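Because a MEMORY table keeps its definition across restarts and loses only its rows, an --init-file script only has to reload data. A minimal sketch, assuming TABLE A and TABLE B from the question live in a hypothetical database `mydb` (note the documented init-file constraint: one statement per line, no comments in the file itself):

```sql
DELETE FROM mydb.B;
INSERT INTO mydb.B SELECT * FROM mydb.A;
```

Point the server at the file from the [mysqld] section of my.cnf, e.g. `init-file=/etc/mysql/init.sql`, and it runs once each time the server starts. If B might not exist yet, create it first with ENGINE=MEMORY and the desired indexes, then run the INSERT ... SELECT.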



Is there a limit on Auto-increment

2005-02-22 Thread gunmuse



I am using the memory table in 4.1 to auto increment. Is there a limit to how
big that number can get?

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183
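There is no MEMORY-specific ceiling: the AUTO_INCREMENT counter stops at the maximum of the column type that holds it, so the limit is chosen when the column is declared. A sketch (`seq_demo` is a hypothetical illustration; the ranges are the standard MySQL integer limits):

```sql
-- int unsigned tops out at 4294967295;
-- bigint unsigned tops out at 18446744073709551615.
CREATE TABLE seq_demo (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
) ENGINE = MEMORY;
```

When the column's maximum is reached, further inserts fail with a duplicate-key error rather than wrapping around.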



Memory Tables and My.cnf config may not be right

2005-02-22 Thread gunmuse



My Memory table hit 16Mb and locked up. Is there something in my.cnf that I
don't have correct? I thought I set it to 128MB memory tables.

max_connections = 3500
max_user_connections = 1500
key_buffer = 750M
myisam_sort_buffer_size = 130M
join_buffer_size = 128M
read_buffer_size = 1M
sort_buffer_size = 128M
read_rnd_buffer_size = 16M
table_cache = 28192
thread_cache_size = 1512
wait_timeout = 7
connect_timeout = 10
max_allowed_packet = 16M
max_connect_errors = 20
query_cache_limit = 8M
query_cache_size = 32M
query_cache_type = 0
skip-innodb
thread_concurrency = 8
safe-show-database
interactive_timeout= 15
tmp_table_size = 12800

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183



Delete without Overhead on a MEMORY.

2005-02-22 Thread gunmuse



We are getting lots of Overhead in our MEMORY table when we delete rows that
are too old. So how do we delete from the table and not consume MEMORY that we
want later?

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183
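With MEMORY tables the space freed by DELETE is not handed back to the operating system, but it is kept on the table's internal free list and reused by later inserts, so the reported overhead is mostly memory waiting to be recycled rather than lost. When a table genuinely needs compacting, rebuilding it in place is one option (a sketch; `fsearch_temp` stands in for the affected table):

```sql
-- Rebuilds the table in place, dropping the free-list overhead.
-- Briefly locks the table while rows are copied.
ALTER TABLE fsearch_temp ENGINE = MEMORY;
```

If the whole table can be cleared at once, TRUNCATE TABLE releases everything in one step instead.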



RE: Memory Tables and My.cnf config may not be right

2005-02-22 Thread gunmuse

Don't want this to roll too far down the list.


  My Memory table hit 16Mb and locked up.  Is there something in my.cnf that
I don't have correct.  I thought I set it to 128MB memory tables.

  max_connections = 3500
  max_user_connections = 1500
  key_buffer = 750M
  myisam_sort_buffer_size = 130M
  join_buffer_size = 128M
  read_buffer_size = 1M
  sort_buffer_size = 128M
  read_rnd_buffer_size = 16M
  table_cache = 28192
  thread_cache_size = 1512
  wait_timeout = 7
  connect_timeout = 10
  max_allowed_packet = 16M
  max_connect_errors = 20
  query_cache_limit = 8M
  query_cache_size = 32M
  query_cache_type = 0
  skip-innodb
  thread_concurrency = 8
  safe-show-database
  interactive_timeout= 15
  tmp_table_size = 12800

  Thanks
  Donny Lairson
  President
  29 GunMuse Lane
  P.O. box 166
  Lakewood NM 88254
  http://www.gunmuse.com
  469 228 2183


RE: Yeah worked like a dream

2005-02-22 Thread gunmuse


Ok folks, your little words of wisdom crunched our problem out, and we built a
metacrawler that is faster than Mamma and Dogpile because we now NEVER touch a
harddrive while searching.  We will be applying this search to our Blogging
software at firebasesoftware.com by Monday.

  Thanks
  Donny Lairson
  President
  29 GunMuse Lane
  P.O. box 166
  Lakewood NM 88254
  http://www.gunmuse.com
  469 228 2183

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 22, 2005 3:17 PM
To: Mysql
Subject: Memory Tables and My.cnf config may not be right


My Memory table hit 16Mb and locked up.  Is there something in my.cnf
that I don't have correct.  I thought I set it to 128MB memory tables.

max_connections = 3500
max_user_connections = 1500
key_buffer = 750M
myisam_sort_buffer_size = 130M
join_buffer_size = 128M
read_buffer_size = 1M
sort_buffer_size = 128M
read_rnd_buffer_size = 16M
table_cache = 28192
thread_cache_size = 1512
wait_timeout = 7
connect_timeout = 10
max_allowed_packet = 16M
max_connect_errors = 20
query_cache_limit = 8M
query_cache_size = 32M
query_cache_type = 0
skip-innodb
thread_concurrency = 8
safe-show-database
interactive_timeout= 15
tmp_table_size = 12800

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


Timed truncate of a table.

2005-02-20 Thread gunmuse



I am building a temporary memory table to store links in. Instead of having
php truncate all links older than 10 minutes, is there just a way to have the
table do it itself automatically?

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


Memory table questions

2005-02-20 Thread gunmuse



We are building two copies of our commonly used tables. When we have to write
something we write it to both tables:

TABLE A
TABLE B (MEMORY table)

When we read to run our script we only read from the MEMORY TABLE.

My question is how do I tell TABLE B to clone TABLE A after a reboot
automatically. That is actually part of the table build itself right? What
does that SQL look like?

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183



Lots of searches

2005-02-18 Thread gunmuse



We want to be able to search through 1 million words looking for matches and a
count of how many times they have been searched.

Example. Words like:

guns
shotguns
longguns
long guns
shot guns

So I want to search all million words for the use of the word 'gun'. Then it
would return me those 5 phrases since the pattern gun is in the phrase.

My question is what is the best way to set up the db to make this search as
fast as possible. Right now it's taking 45-50 seconds and I would like to get
it down to 4-5 seconds.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183
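A pattern in the middle of a word cannot use an ordinary B-tree index (indexes only help with prefixes), which is why this query is inherently a full scan. The SQL itself is just LIKE with wildcards on both sides (table and column names here are hypothetical stand-ins for the poster's schema):

```sql
-- The leading % prevents any index on `phrase` from being used,
-- so every row is examined.
SELECT phrase, times_searched
FROM searched_phrases
WHERE phrase LIKE '%gun%';
```

At a million short rows, the practical fix suggested downthread is to make sure the scan happens in RAM, either via a MEMORY copy of the table or enough memory that the OS caches the data file, which is typically the difference between 45 seconds and a few.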



RE: Lots of searches

2005-02-18 Thread gunmuse
The memory deal gave me an idea.

A little hotcopy to a Memory table once a day, and perform the search from
there.  Since the search is a nicety for the users, it's not critical for it
to be 100% accurate or up to date.

How do you do a hotcopy with php then?  Never even seen it done.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183
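mysqlhotcopy works by copying MyISAM table files on disk, so it cannot target a MEMORY table anyway; for a once-a-day refresh of an in-RAM search copy, plain SQL issued from PHP or cron is enough. A minimal sketch, assuming the on-disk data lives in `words` and the searchable copy in `words_mem` (both names hypothetical):

```sql
-- Rebuild the in-RAM search copy from the on-disk table.
TRUNCATE TABLE words_mem;
INSERT INTO words_mem SELECT * FROM words;
```

Since accuracy only needs to be daily-fresh, the brief window where words_mem is empty during the refresh is usually acceptable; if not, build a second copy and swap it in atomically with RENAME TABLE.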


-Original Message-
From: Brent Baisley [mailto:[EMAIL PROTECTED]
Sent: Friday, February 18, 2005 1:41 PM
To: [EMAIL PROTECTED]
Cc: Mysql
Subject: Re: Lots of searches


There's nothing you can do except get really fast hard drives. Since
you are searching for random parts of words you can't use an index. So
you're stuck with doing a full table scan. Your entire table probably
won't fit in memory, so you will be reading from disk. Thus, fast disk
drives in a RAID setup.


On Feb 18, 2005, at 1:56 PM, [EMAIL PROTECTED] wrote:

 We want to be able to search through 1 million words looking for
 matches and a count of how many times they have been searched.
  
 Example. Words like:
  
  
 guns
 shotguns
 longguns
 long guns
 shot guns
  
 So I want to search all million words for the use of the word 'gun' 
 Then it would return me those 5 phrases since the pattern gun is in
 the phrase.
  
 My question is what is the best way to setup the db to make this
 search as fast as possible.  Right now its taking 45-50 seconds and I
 would like to get it down to 4-5 seconds.

 Thanks
 Donny Lairson
 President
 29 GunMuse Lane
 P.O. box 166
 Lakewood NM 88254
 http://www.gunmuse.com
 469 228 2183

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



A Heartfelt thank you.

2005-02-13 Thread gunmuse
I would just like to say thank you to everyone on the list who helped hunt
down and find quirks while we coded up our latest software.

We are absolutely thrilled with the resulting speed and power of our CMS
software now.  The ability to optimize people's websites while still keeping
them dynamic is a huge leap in search engine optimization.

We are giving away the software for free.  If you would like to see the final
result, download a copy and please give us some comments from your
perspective.

http://www.firebasesoftware.com/firebase_downloads/FB-ReleaseVersion-1-1.zip

Again, you have been very helpful, and I am sure once we finally update our
server to 4.1.8 we will be back with a boatload of questions.  I know we are
interested in the sub search functionality of the new MySql.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183


-Original Message-
From: David Blomstrom [mailto:[EMAIL PROTECTED]
Sent: Sunday, February 13, 2005 3:18 PM
To: mysql@lists.mysql.com
Subject: Re: Where's my ODBC icon?



--- Andrew Pattison [EMAIL PROTECTED] wrote:

 However, when I double-clicked odbccp32.cpl, I was
 rewarded with
 something similar to what I got before.

 Not sure what you are looking for then. The myODBC
 driver should not need configuring, beyond setting
 up data sources, which is exactly what the control
 panel applet does for you. There is no program to
 launch - you configure a data source to allow you to
 access data, then use your ODBC-capable program to
 connect to that data source.

 Start up the ODBC applet and change to the System
 DSN tab. Next, add a new data source which uses the
 myODBC driver to connect to the database you want to
 access via ODBC. Once you have done this, you can
 then connect to the data source from your
 ODBC-capable program using the name of the data
 source.

OK, now I understand it a little better. My ultimate
goal is to extract some data from some GIS files and
import them into MySQL. So it looks like I'm going to
be using ODBC to connect a software program called
GeoClient (which I haven't begun to figure out yet) to
another program called ArcExplorer (which isn't
working for me).

This should be interesting. :)

Thanks.



__
Do you Yahoo!?
Meet the all-new My Yahoo! - Try it today!
http://my.yahoo.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Import Excel data into table

2005-01-13 Thread gunmuse
NaviCat

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183 


-Original Message-
From: Steve Grosz [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 13, 2005 2:56 PM
To: mysql@lists.mysql.com
Subject: Import Excel data into table


Can anyone tell me a good way to import individual column data into a 
table?  Is there a tool to assist with this?

Thanks,
Steve

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Pear abstraction class

2005-01-05 Thread gunmuse



We are using a pear abstraction class written by Mohammed J. Kabir. Problem is
it's out of date for error reporting. Does anyone know where a current version
of this is? Hopefully one that would be a plug-in exchange for the one we
have, so that we don't have to rewrite the software.

Thanks
Donny Lairson
President
29 GunMuse Lane
P.O. box 166
Lakewood NM 88254
http://www.gunmuse.com
469 228 2183



Heap Help

2004-11-29 Thread gunmuse



I want to put a table in Ram (HEAP) with a field of at least 500 characters.
How do I do this if Blob and text are not allowed?

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183
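In MySQL 4.1 a HEAP/MEMORY table cannot contain BLOB or TEXT columns, but VARCHAR up to 255 characters is allowed, so one workaround for a 500-character field is to split it across two columns and reassemble it on read. A sketch (table and column names hypothetical):

```sql
CREATE TABLE long_text_heap (
  id INT NOT NULL,
  body_a VARCHAR(255) NOT NULL DEFAULT '',  -- first 255 characters
  body_b VARCHAR(255) NOT NULL DEFAULT '',  -- remaining characters
  PRIMARY KEY (id)
) ENGINE = MEMORY;

-- Reassemble the full value when selecting:
SELECT id, CONCAT(body_a, body_b) AS body FROM long_text_heap;
```

Note that pre-5.0 MEMORY tables store VARCHAR at fixed length, so each row costs the full 510 bytes for the two columns regardless of how much text they hold.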


Take a normal Table and Make it a Heap table

2004-11-26 Thread gunmuse



I have to pull in 200 search rows and store them temporarily in a Table called
xmllinks. This is so I can track the click on the one link of the 200 I bring
down. Nothing is permanently stored in this table.

This is just a normal table in a db right now, but during peak traffic times
it bogs down the MySql. What do I have to do to move this one table into a
heap of ram? I didn't code this software, so please give me a little detail as
to whether this is done at the software side or the MySql side, and what I
need to edit to stick this up into ram and verify that it's there. I do want
to verify it's actually there somehow.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183


RE: Take a normal Table and Make it a Heap table

2004-11-26 Thread gunmuse
I agree, and we are rewriting this application ourselves to accommodate these
types of issues of making it faster, faster, faster.

But I would like to patch what I have at the same time.  Call me greedy.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: sol beach [mailto:[EMAIL PROTECTED]
Sent: Friday, November 26, 2004 11:25 AM
To: [EMAIL PROTECTED]
Subject: Re: Take a normal Table and Make it a Heap table


When your only tool is a hammer, all problems are viewed as nails.

A shovel is a great tool for creating a hole in the ground,
but only when the right end contacts the ground.

You seem to be using the wrong end of the computer.

With computers, you can have it good, fast, or cheap.
Pick any two (pay the price in the third).

I wish you luck in re-inventing the wheel and rolling your own custom
SCALABLE application.

P.S.
Scalability needs to be designed into the architecture from the start.
It rarely can be bolted together after the bottlenecks are encountered,
because bottlenecks result from inappropriate original design decisions.


On Fri, 26 Nov 2004 11:12:38 -0700, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

 I have to pull in 200 search rows and store them temporarily in a Table
 called xmllinks  This is so I can track the click on the one link of the
200
 I bring down.  Nothing is permanently stored in this table

 This is just a normal table in a db right now but during peak traffic
times
 it bogs down the MySql.  What do I have to do to move this one table into
a
 heap of ram.  I didn't code this software so please give me a little
detail
 as to whether this is done at the software side or the MySql side.  Of
what
 I need to edit to stick this up into ram and verify that its there.  I do
 want to verify its actually there somehow.

 Thanks
 Donny Lairson
 President
 http://www.gunmuse.com
 469 228 2183



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Take a normal Table and Make it a Heap table

2004-11-26 Thread gunmuse
I ran this and got the following error.  Why doesn't heap support
autoincrement?  Or does it and I need to do something different.

CREATE TABLE `xmllinks2` (
  `rowID` int(11) NOT NULL auto_increment,
  `affiliateID` int(11) NOT NULL default '0',
  `pluginName` varchar(255) NOT NULL default '',
  `linkID` int(11) NOT NULL default '0',
  `linkURL` varchar(255) NOT NULL,
  `bid` decimal(10,4) NOT NULL default '0.0000',
  `uniqueData` varchar(255) NOT NULL,
  `searchDate` datetime NOT NULL default '0000-00-00 00:00:00',
  PRIMARY KEY  (`rowID`)
) TYPE=HEAP AUTO_INCREMENT=1425725 ;



#1164 - The used table type doesn't support AUTO_INCREMENT columns

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183
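Error #1164 fits a pre-4.1 server: HEAP tables only gained AUTO_INCREMENT support in MySQL 4.1. On an older server the workaround is to drop the attribute and have the application assign rowID values itself; a minimal sketch of the adjusted definition (columns abbreviated from the posted statement):

```sql
CREATE TABLE `xmllinks2` (
  `rowID` int(11) NOT NULL,     -- no auto_increment: the app supplies it
  `linkURL` varchar(255) NOT NULL,
  PRIMARY KEY (`rowID`)
) TYPE=HEAP;
```

Upgrading to 4.1, where ENGINE=MEMORY accepts AUTO_INCREMENT columns, removes the need for the workaround entirely.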



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, November 26, 2004 11:59 AM
To: sol beach
Cc: Mysql
Subject: RE: Take a normal Table and Make it a Heap table


I agree, and we are rewriting this application ourselves to accommodate these
types of issues of making it faster, faster, faster.

But I would like to patch what I have at the same time.  Call me greedy.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: sol beach [mailto:[EMAIL PROTECTED]
Sent: Friday, November 26, 2004 11:25 AM
To: [EMAIL PROTECTED]
Subject: Re: Take a normal Table and Make it a Heap table


When your only tool is a hammer, all problems are viewed as nails.

A shovel is a great tool for creating a hole in the ground,
but only when the right end contacts the ground.

You seem to be using the wrong end of the computer.

With computers, you can have it good, fast, or cheap.
Pick any two (pay the price in the third).

I wish you luck in re-inventing the wheel and rolling your own custom
SCALABLE application.

P.S.
Scalability needs to be designed into the architecture from the start.
It rarely can be bolted together after the bottlenecks are encountered,
because bottlenecks result from inappropriate original design decisions.


On Fri, 26 Nov 2004 11:12:38 -0700, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

 I have to pull in 200 search rows and store them temporarily in a Table
 called xmllinks  This is so I can track the click on the one link of the
200
 I bring down.  Nothing is permanently stored in this table

 This is just a normal table in a db right now but during peak traffic
times
 it bogs down the MySql.  What do I have to do to move this one table into
a
 heap of ram.  I didn't code this software so please give me a little
detail
 as to whether this is done at the software side or the MySql side.  Of
what
 I need to edit to stick this up into ram and verify that its there.  I do
 want to verify its actually there somehow.

 Thanks
 Donny Lairson
 President
 http://www.gunmuse.com
 469 228 2183



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Take a normal Table and Make it a Heap table

2004-11-26 Thread gunmuse
Can I put a MyISAM table into Ram permanently?

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183 



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, November 26, 2004 11:59 AM
To: sol beach
Cc: Mysql
Subject: RE: Take a normal Table and Make it a Heap table


I agree, and we are rewriting this application ourselves to accommodate these
types of issues of making it faster, faster, faster.

But I would like to patch what I have at the same time.  Call me greedy.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: sol beach [mailto:[EMAIL PROTECTED]
Sent: Friday, November 26, 2004 11:25 AM
To: [EMAIL PROTECTED]
Subject: Re: Take a normal Table and Make it a Heap table


When your only tool is a hammer, all problems are viewed as nails.

A shovel is a great tool for creating a hole in the ground,
but only when the right end contacts the ground.

You seem to be using the wrong end of the computer.

With computers, you can have it good, fast, or cheap.
Pick any two (pay the price in the third).

I wish you luck in re-inventing the wheel and rolling your own custom
SCALABLE application.

P.S.
Scalability needs to be designed into the architecture from the start.
It rarely can be bolted together after the bottlenecks are encountered,
because bottlenecks result from inappropriate original design decisions.


On Fri, 26 Nov 2004 11:12:38 -0700, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

 I have to pull in 200 search rows and store them temporarily in a Table
 called xmllinks  This is so I can track the click on the one link of the
200
 I bring down.  Nothing is permanently stored in this table

 This is just a normal table in a db right now but during peak traffic
times
 it bogs down the MySql.  What do I have to do to move this one table into
a
 heap of ram.  I didn't code this software so please give me a little
detail
 as to whether this is done at the software side or the MySql side.  Of
what
 I need to edit to stick this up into ram and verify that its there.  I do
 want to verify its actually there somehow.

 Thanks
 Donny Lairson
 President
 http://www.gunmuse.com
 469 228 2183



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Complete php,mysql packages

2004-11-24 Thread gunmuse
A Wamp server is an excellent windows install; it eliminates those little
hiccups.


http://www.wampserver.com/en/index.php


Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: Danesh Daroui [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 24, 2004 8:59 AM
To: [EMAIL PROTECTED]
Subject: MySQL and PHP


Hi all,

I have problem by using PHP and MySQL. I have installed MySQL Server
4.1.7 on a Linux  machine with Apache. PHP interpreter has been
installed on it by default but I am not sure if PHP modules for MySQL
has been installed too or not. On MySQL download section there is only
some extensions which has links to PHP site. I couldn't use them really.
Can anybody help ? How can I install PHP modules for MySQL so I would
work with mysql database through PHP ?

Regards,

Danesh




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: On but off topic Putting a file in Ram

2004-11-23 Thread gunmuse
Actually no.  I have a file that is determined to be requested by mysql (Top
100 site).  What I am wanting to do is put the images and/or files into Ram to
serve them from there instead of the harddrive, and conserve hd resources for
tasks that aren't known in advance.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: Eamon Daly [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 9:17 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram


 The reason I ask this here.  Is I have graphics that are loaded by Mysql
 and
 was wondering if I can do the same for them since some of these sites can
 call my server 10-20,000 times a day for that same graphic.

I assume you mean that you have image data stored in a MySQL
table somewhere and are using a SELECT to fetch and serve
it. I think the general consensus would be something along
the lines of Don't do that. Apache was /designed/ to serve
files quickly, so let it do what it does best. Store just
the filenames in MySQL and let Apache handle the rest. Once
you've done that, you can do plenty of things to speed up or
scale your system, such as mapping the files to memory with
mod_file_cache, judicious use of a caching proxy, or the
creation of a ramdisk.


Eamon Daly



- Original Message -
From: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, November 22, 2004 8:20 PM
Subject: On but off topic Putting a file in Ram


I have a small file that calls a search function at Findwhat in case Mysql
 locally overloads.  I just put on a new partner who looks like they may
 call
 my server 40 million times a month.

 I know there is some way to put a file into Ram for super fast response.
 Question is how do I do this?

 Will it still write to Mysql from the Ram Drive?  What is the downside of
 doing this?

 The reason I ask this here.  Is I have graphics that are loaded by Mysql
 and
 was wondering if I can do the same for them since some of these sites can
 call my server 10-20,000 times a day for that same graphic.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: On but off topic Putting a file in Ram

2004-11-23 Thread gunmuse
Heap/Memory tables that is the phrase I couldn't remember.  The data is
stored in the file system.

I have one file that that is linked to via JavaScript to run a php file and
send an output.  That file accesses MySql  OR if I am overloaded it bypasses
my local system and goes directly to Findwhat.com to produce the search.  By
putting that file into memory I should be able to handle any load fairly
easily.

Any suggestions on where I should read to learn how to use heap/memory on
Linux/enterprise?

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 11:10 AM
To: [EMAIL PROTECTED]
Cc: Eamon Daly; [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram


Is the actual data stored in the database or somewhere in the file
system? If you do not have text or blob columns you may be able to use
heap/memory tables.

[EMAIL PROTECTED] wrote:

Actually no.  I have a file that MySQL has determined to be frequently
requested (a top-100 site).  What I am wanting to do is put the images and/or
files into RAM to serve them from there instead of the hard drive, and
conserve hard drive resources for other tasks.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: Eamon Daly [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 23, 2004 9:17 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: On but off topic Putting a file in Ram




The reason I ask this here.  Is I have graphics that are loaded by Mysql
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.



I assume you mean that you have image data stored in a MySQL
table somewhere and are using a SELECT to fetch and serve
it. I think the general consensus would be something along
the lines of Don't do that. Apache was /designed/ to serve
files quickly, so let it do what it does best. Store just
the filenames in MySQL and let Apache handle the rest. Once
you've done that, you can do plenty of things to speed up or
scale your system, such as mapping the files to memory with
mod_file_cache, judicious use of a caching proxy, or the
creation of a ramdisk.


Eamon Daly



- Original Message -
From: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, November 22, 2004 8:20 PM
Subject: On but off topic Putting a file in Ram




I have a small file that calls a search function at Findwhat in case Mysql
locally overloads.  I just put on a new partner who looks like they may
call
my server 40 million times a month.

I know there is some way to put a file into Ram for super fast response.
Question is how do I do this?

Will it still write to Mysql from the Ram Drive?  What is the downside of
doing this?

The reason I ask this here.  Is I have graphics that are loaded by Mysql
and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.

















How do you do the math

2004-11-22 Thread gunmuse



I see guys talk about doing the math to determine how many Apache servers you
need, how much RAM you have or have left.

My question is: how do you determine how much your MySQL can handle?

I am running a Dell dual 2.8 with 2 GB of RAM, RAID 5.

What's the max connections MySQL can handle?  I have mine set in the 1000s
now because any lower my server seems to choke.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183
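[Editor's note: as a rough way to "do the math" for capacity, a back-of-envelope sketch. The monthly figure echoes the 40 million calls/month mentioned elsewhere on-list; the peak factor and per-query hold time are pure assumptions:]

```python
# Back-of-envelope capacity estimate: average and peak queries/second
# from a monthly request count. All figures are illustrative assumptions.
monthly_requests = 40_000_000          # e.g. the 40M calls/month cited on-list
seconds_per_month = 30 * 24 * 3600

avg_qps = monthly_requests / seconds_per_month
peak_qps = avg_qps * 10                # assume peak traffic is 10x average

# If each query holds a connection for ~50 ms, concurrent connections needed:
hold_time_s = 0.05
concurrent = peak_qps * hold_time_s

print(f"avg {avg_qps:.1f} q/s, peak {peak_qps:.0f} q/s, "
      f"~{concurrent:.0f} open connections")
```

On numbers like these, max_connections in the thousands is far above what the steady-state load needs; a setting that high usually papers over slow queries holding connections open rather than raw volume.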


On but off topic Putting a file in Ram

2004-11-22 Thread gunmuse
I have a small file that calls a search function at Findwhat in case Mysql
locally overloads.  I just put on a new partner who looks like they may call
my server 40 million times a month.

I know there is some way to put a file into Ram for super fast response.
Question is how do I do this?

Will it still write to Mysql from the Ram Drive?  What is the downside of
doing this?

The reason I ask this here.  Is I have graphics that are loaded by Mysql and
was wondering if I can do the same for them since some of these sites can
call my server 10-20,000 times a day for that same graphic.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183
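[Editor's note: one common Linux answer to the "put a file into RAM" part of this question is tmpfs; a sketch of an /etc/fstab entry, where the mount point and size are assumptions:]

```
# /etc/fstab entry: a 64 MB RAM-backed filesystem. Files copied here are
# served from memory but vanish on reboot, so keep the on-disk originals.
tmpfs  /mnt/ramcache  tmpfs  size=64m,mode=0755  0  0
```

That reboot behaviour also answers the "downside" question: MySQL's own data files should stay on disk, and only static, reproducible files such as images belong on the ramdisk.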



-Original Message-
From: Jonathan Duncan [mailto:[EMAIL PROTECTED]
Sent: Monday, November 22, 2004 5:34 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: MySQL Books


Sasha,

Plugs from authors are interesting, but plugs from readers are what
really sell a book.  I will check it out though.  Thank you for the
response.

Jonathan


Sasha Pachev [EMAIL PROTECTED] 11/19/04 5:36 pm 
Jonathan Duncan wrote:
I have the MySQL first edition book by Paul.  Still a great reference.
However, it being a bit outdated I was hoping to get a more current
book
and one with more examples, since I learn best by example.  The first
book has  good examples, but more would still help.

Therefore, I was comparing reviews online for the following two books:
-MySQL, Second Edition by Paul DuBois
-Mastering MySQL 4 by Ian Gilfillan

Any preferences between these two?  Any better suggestions for learning

MySQL front and back from a DBA perspective to an end user perspective?


Jonathan:

May I offer a shameless plug? MySQL Enterprise Solutions. Being the
first book
I've ever written, it does have its weaknesses, but also has its
strengths. For
every configuration variable in Chapter 14, and for every status
variable in
Chapter 15 I went to the source to make sure I understood what was going
on
behind the scenes before I wrote the description. It is also the only
book that
I know of so far that discusses MySQL internals (I am working on another
one
dedicated solely to MySQL Internals).

It was written in 2002, so it does focus on 3.23-4.0. However, this is
not that
big of a minus. Due to the strong commitment of the MySQL team to
backwards
compatibility, most if not almost everything the book says applies to
4.1 and
5.0. It is just that the newer versions have some new features and
options that
the book does not cover.


--
Sasha Pachev
Create online surveys at http://www.surveyz.com/











Thread concurrency

2004-11-22 Thread gunmuse



Just out of curiosity, I recently turned this up from a 4 to a 16 to see what
would choke.  Nothing did.  Matter of fact, it seems like my load went down
quite a bit.

Is there something wrong with leaving it this way?
Since these should be really simple tasks on Xeon processors, shouldn't it be
able to spawn even more than 16?

What was the thinking behind the 2 and 4 numbers in this arena?

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183
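[Editor's note: the variable in question is set in my.cnf; a sketch using the commonly cited rule of thumb of 2x the number of CPUs the OS sees. Per the MySQL documentation, thread_concurrency is only a hint to the OS and has an effect mainly on Solaris, which may be why raising it on other platforms changes little:]

```ini
# my.cnf fragment: dual hyper-threaded Xeons appear as 4 CPUs to the OS,
# so the usual starting value would be 2 x 4 = 8.
[mysqld]
thread_concurrency = 8
```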


RE: DBManager 3.0

2004-11-16 Thread gunmuse
Nice interface, but it's crippleware.  Anything that might be of use is locked,
so I couldn't test exactly what it would tell me.  I tried to run reports:
need to purchase.  Tried to diagram, but I'm not sure what that did for me other
than tell me what tables I already had.  I could see some benefit for building
a complex DB, but it's not needed to test this software.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: COS [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 16, 2004 6:25 AM
To: PostgreSQL List; MySQL List
Subject: ANN: DBManager 3.0


Hi,

We are pleased to announce the awaited DBManager Professional 3.0.
This version is a new GUI with lots of new features. DBManager is a client
application to manage your databases for Windows environments.

Some of the features in this version are

- Manage all objects in the database server: MySQL 5, PostgreSQL 8, SQLite
3, Xbase, Interbase, Firebird, MSAccess(*), MSSQL Server(*), Sybase(*),
Oracle(*)
- New and improved design, Multi Document TAB Interface compatible with
Visual Studio .NET
- Create, build, import and export queries
- Create PHP and ASP(*) scripts for the web to any database flavour(*)
(PHP/native and ODBC/ASP)
- Import structure and data from a variety of sources including: Text Files,
XML, MSAccess, MSExcel, Dbase III/Clipper/FoxPro, Paradox, ODBC, etc
- Export structure and data to MSAccess, MSExcel, Text Files
(CSV/Formatted), HTML, XML, SQL Dumps
- User and Privilege (*) Manager
- Task Builder (*) to automate processes
- Diagram Designer (*) to create and maintain your database structure
- New Table Designer
- New Query Editor
- Report and Form Builders (*) to create reports and Forms in HTML format
- Monitor (*) the Activities of the Server, Database or Table with a simple
click
- DBManager Console to type your commands
- Database Comparer (*) to keep your databases updated
- and much, much more.

(*) Available only in the Enterprise Edition

If you want to know more about DBManager or download the new version please
go to http://www.dbtools.com.br.

Best Regards,

Crercio O. Silva / DBTools Software









RE: newbie: relationships between fields

2004-11-16 Thread gunmuse
NaviCat: search for it on Google; it will make your life much easier.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183



-Original Message-
From: Amer Neely [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 16, 2004 1:43 PM
To: MySQL
Cc: Christian Kavanagh
Subject: Re: newbie: relationships between fields


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

| Dear list,
|
| I'm hoping to move to MySQL from Access, mostly so I can use my Mac to
build databases.  So far
I've been able to set up MySQL, connect to it, and create databases and
tables.  Getting to this
point has required a paradigm shift or two on my part (my first question
after it was installed was,
okay, how the hell do I open up the app and start work?).
|
| Now I'd like to create some relationships between the tables in my
database.  But I'm having some
trouble getting my head around how to do this - probably because I'm working
with an Access
paradigm.  Imagine I had two tables:
|
[snip]

Perhaps a good session with a relational database book would help you break
the 'Access' paradigm :)
I have had good success with 'Database Design For Mere Mortals', Michael J.
Hernandez, 0-201-69471-9.

'MySQL' by Paul DuBois will also give you a good start to RDBMS concepts and
design.
- --
/* All outgoing email scanned by AVG Antivirus */
Amer Neely, Softouch Information Services
Home of Spam Catcher  North Bay Information Technology Networking Group
W: www.softouch.on.ca
E: [EMAIL PROTECTED]
Perl | PHP | MySQL | CGI programming for all data entry forms.
We make web sites work!
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.0.6 (MingW32)
Comment: For info see http://www.gnupg.org

iEYEARECAAYFAkGaZlAACgkQ3RxspxLYVsWIlACgnWa+wSt1xO8QTws3cldjsI+3
suQAn0i5mmNVOMCBvY2bB4arjZQNKYVs
=IyI1
-END PGP SIGNATURE-








4.1.8

2004-11-15 Thread gunmuse
Does 4.1.8 address any of the issues I am reading about in 4.1.7?  Are all
of these issues valid, or are you finding that it's lazy coding?



Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183





UTF-8 compliance

2004-11-09 Thread gunmuse



What does this all entail?  Can we use Under_Scores in table names?

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183


XML to Mysql

2004-11-08 Thread gunmuse



This may just sound stupid but I am going to ask anyway.

We run a search engine and we bring in 3+ XML feeds from other search engines
via Perl and PHP.

So we can end up with 300 results listed for EACH SEARCH.  They are only
valid for that ONE SEARCH, but we need to track every click for proper payment.

As you may imagine, this has a huge overhead on our MySQL.  We are growing and
about to rewrite our search engine, and would like ideas on how to make the
best use of these temporary storage tables.  Some of the tracking URLs can be
440+ characters long, so we can't use VARCHAR.

Also, if you have 20 searches per minute coming in and 30 seconds between the
clicks, the table can get big quickly.  So jumping directly to the right
stored link immediately, if not sooner, is a must.

Just spitballing for the plan ahead.  Wanting to make sure we use all the
available speed assets at our disposal.

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183
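[Editor's note: not from the thread, but one common pattern for the 440+ character tracking URLs. Since VARCHAR tops out at 255 characters before MySQL 5.0.3, the full URL can go in a TEXT column while a fixed-length digest of it is indexed for fast lookups. A sketch in Python, where the URL itself is a placeholder:]

```python
import hashlib

def url_key(url: str) -> str:
    """Fixed-length (32 hex chars) lookup key for an arbitrarily long URL.

    The full URL would live in an unindexed TEXT column; this digest goes
    in an indexed CHAR(32) column so lookups stay fast regardless of
    URL length.
    """
    return hashlib.md5(url.encode("utf-8")).hexdigest()

# Hypothetical 440+ character tracking URL
long_url = "http://tracking.example.com/click?" + "x" * 440
key = url_key(long_url)
print(key, len(key))
```

The click lookup then becomes `WHERE url_key = ?` against a short indexed column instead of comparing long strings, which keeps the per-search temporary rows cheap to find and delete.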


Temporary Upgrade for Cpanel?

2004-11-07 Thread gunmuse



I read on the Cpanel.net forum that you can start the 4.0.22 MySQL with a
--new switch to use it as 4.1.7, so you can see if it's compatible with my
software.

Is this true, and exactly how?

Thanks
Donny Lairson
President
http://www.gunmuse.com
469 228 2183