mysql replication

2012-08-28 Thread aaron zhang
Hi all,

I am using MySQL replication, but the slave host does not replicate. When I
checked the error log I found the message 'Row size too large'. I do not
understand why this happens; any help would be appreciated. Thanks.


explain shows type = ALL for indexed column

2010-12-07 Thread Aaron Turner
Basically, I'm doing a:

 select fields FROM Database1.table1 WHERE indexed_field IN (SELECT
field FROM Database2.table2);

It's taking about 40sec to execute where table1 (InnoDB) only has
about 33k records and my subselect is returning about 600 records.
Explain shows that it's doing a type=ALL against table1, even though
the indexed_field is an indexed varchar(64).   I've verified the
subselect executes in under 1 second so I know it's not the problem.

I'm guessing that MySQL just can't keep everything in memory at once
to use the index since the indexed_field is relatively large.
Normally, I'd compare against an integer primary key, but that's not
possible and I can't modify the schema to make it possible.

I've been reading the my.cnf documentation and various tuning
articles, but it's not clear what variables I should tweak to solve
this specific issue.  Server is a dual-quad core w/ 4GB of RAM,
although it's not dedicated to MySQL (webserver and some other
database centric background jobs run).   Table1 however is on a
dedicated RAID1 disk pair and is getting regular inserts/deletes (it's
a log table).

Any advice would be appreciated!

-- 
Aaron Turner
http://synfin.net/         Twitter: @synfinatic
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.
    -- Benjamin Franklin
carpe diem quam minimum credula postero

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: explain shows type = ALL for indexed column

2010-12-07 Thread Aaron Turner
Thanks Gavin.  Rewriting the query to not use the subselect solved the problem!

On Tue, Dec 7, 2010 at 11:33 AM, Gavin Towey gto...@ffn.com wrote:
 Mysql often handles subqueries poorly.  It's best to rewrite that as a JOIN 
 instead:

 http://dev.mysql.com/doc/refman/5.1/en/rewriting-subqueries.html

 If you have further questions after doing that, show the table structures, 
 the query, and the explain output.
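
The JOIN form suggested above would look roughly like this (a sketch only;
the actual column list and join condition depend on the original poster's
schema, which isn't shown in full):

```sql
-- Sketch: the IN (SELECT ...) subquery rewritten as an INNER JOIN.
-- Table and column names mirror the query quoted earlier in the thread.
SELECT t1.*
FROM Database1.table1 AS t1
INNER JOIN Database2.table2 AS t2
        ON t1.indexed_field = t2.field;
```

If table2.field can contain duplicates, adding DISTINCT (or using an EXISTS
semi-join) keeps the result set equivalent to the original IN form.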


-- 
Aaron Turner




substring query

2010-06-10 Thread Aaron Savage
I am looking for some guidance on creating a substring query.  I have
a column that stores a path to a file.  I would like to extract just the
file extension and display it in my results.  However, the paths are
different lengths and some extensions are 3 letters and some are 4,
e.g. 'html'.  The only thing they all have in common is the period
before the extension.  Has anyone created a nested substring query that
can do what I am looking to do?

Thanks,
Aaron




Re: substring query

2010-06-10 Thread Aaron Savage
Thanks, guys, for your insights.

It may be a little more complicated than I made it out to be.

I have tried this: select substring_index(myfield,'.',-2) from mytable.
 This has gotten me to a good starting point.

But I still have two problems.  After the extension there is a space
and more wording, which I want to cut off.  Also, some paths do not
have an extension, and I am trying to ignore those.  So, simply: I am
just trying to pull out the file extension, but there were some
conditions I did not list.

-Aaron


 SUBSTRING_INDEX should do what you want.

 SELECT SUBSTRING_INDEX('myfile.path','.',-1)
 = 'path'

 SELECT SUBSTRING_INDEX('myfile.pth','.',-1)
 = 'pth'

 or, in a version that's closer to real life usage:

 SELECT SUBSTRING_INDEX(myfield,'.',-1) from mytable

 http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_substring-index

 Mark
 --
 http://mark.goodge.co.uk






Re: substring query

2010-06-10 Thread Aaron Savage
Sorry Jay,

Here is what I came up with.

select substring(substring_index(myfile,'.',-2),1,4) AS MyColumn from
mydatabase group by  MyColumn;

That appears to yield what I need.  I just need to filter out the
results that do not have an extension.
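
One way to combine both conditions, sketched against the column names used
earlier in the thread (untested, and it assumes the only '.' of interest is
the last one in the path):

```sql
-- Skip rows with no '.', then take everything after the last '.'
-- and truncate at the first space to drop the trailing wording.
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(myfield, '.', -1), ' ', 1) AS ext
FROM mytable
WHERE myfield LIKE '%.%';
```

Note that a dot elsewhere in the path (e.g. in a directory name) would still
pass the LIKE filter, so the WHERE clause may need tightening for such data.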

-Aaron


On Thu, Jun 10, 2010 at 9:28 AM, Jay Blanchard jblanch...@pocket.com wrote:

 Thank you for that update, would have been good to have from the start.

 SELECT SUBSTRING_INDEX('my.doc','.',-1)






Re: Replication switch Master to slave and back

2009-07-08 Thread Aaron Blew
The simplest method may be to set them up in a multi-master configuration,
similar to what's documented here:
http://www.onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html

This way you won't have to worry about switching between two masters in a
failover scenario since they're both writable.

-Aaron
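
The settings involved in such a setup typically look something like the
following (illustrative values only; server IDs, offsets, and log names must
be adapted, and this sketch omits the CHANGE MASTER statements that point
each server at the other):

```ini
# Illustrative my.cnf fragments -- adapt to your environment.

# Server A:
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

# Server B: identical except
server-id                = 2
auto_increment_offset    = 2
```

The auto_increment settings keep the two writable masters from handing out
colliding AUTO_INCREMENT values (one generates odd IDs, the other even).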

On Wed, Jul 8, 2009 at 1:38 PM, Cantwell, Bryan bcantw...@firescope.comwrote:

 I have successfully set up a replication master/slave scenario with my
 MySQL 5.0.51a
 Now in the event of a fail over to the slave (an identical appliance), I
 want the old master to become the slave for an eventual switch back the
 other way.
 Would it really take the same process to keep the old master up to date
 with the new one? Or is there a way to bring it up to date with the new
 machine without a mysqldump or copying data files?

 I have binary logging running on both machines in hopes that I could just
 tell the new slave how to catch up with the new master...
 Any assistance here?

 thanks





Ordering an IN query

2009-06-05 Thread Aaron Fischer

I have a query that I build dynamically, here is an example:

select * from (table1 as t1 left join table2 as t2 on t1.id = t2.id)
left join table3 as t3 on t1.id = t3.id where t1.id in ('221593',  
'221591', 'CC3762', 'CC0059')


So I build the query in the order that I want it displayed.  That is,
display 221593, then 221591, then CC3762, etc.  However, when the
query is executed it looks like it automatically sorts the codes in
ascending order, so I get 221591, then 221593, then CC0059, etc.


I want the results displayed in the order that I build the query.  Is  
there some way of doing that?


Thanks,

-Aaron
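
MySQL's FIELD() function handles exactly this case: it returns the 1-based
position of its first argument among the remaining arguments, so sorting on
it reproduces a hand-picked order. A sketch against the query above (the
select list is elided in the original, so a bare * stands in here):

```sql
SELECT *
FROM (table1 AS t1 LEFT JOIN table2 AS t2 ON t1.id = t2.id)
LEFT JOIN table3 AS t3 ON t1.id = t3.id
WHERE t1.id IN ('221593', '221591', 'CC3762', 'CC0059')
ORDER BY FIELD(t1.id, '221593', '221591', 'CC3762', 'CC0059');
```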




Re: Best RAID for a DB + LVM?

2009-02-24 Thread Aaron Blew
SCSI/FC/SAS drives are absolutely faster, especially at 15kRPM.  Your
requirements for IOPS vs. usable space may actually make it less expensive to
use FC drives (e.g. if you don't have to retain much data but need it to be
really fast _all_ the time).  This can be especially true if you take
datacenter costs into consideration.

There's also a difference between Enterprise SATA drives and regular
Desktop SATA drives.  The Enterprise class drives tend to be optimized for
workloads that a database may throw at them.

One thing to keep in mind if your dataset isn't terribly large would be to
cram as much RAM in the host as you can.  If you've only got a portion of
your data that's heavily accessed, keeping it in RAM would be ideal.

-Aaron


On Mon, Feb 23, 2009 at 9:17 AM, Brent Baisley brentt...@gmail.com wrote:

 SCSI isn't necessarily faster now. The big difference used to be
 SCSI's support for command queueing, which is why it was faster in
 multi-user environments. Command queueing is now fairly common in SATA
 drives.
 The highest end SCSI is probably still faster than the highest end
 SATA, but you will have less disk space and it will cost much more.
 I would recommend using one of the RAID-in-a-box solutions. They have
 big caches for the whole RAID and they are optimized to the
 controllers. If money isn't really an issue, you may look into
 something like NetApp. That would have everything you need.

 Brent Baisley

 On Mon, Feb 23, 2009 at 6:58 AM, Andy Smith a.sm...@ukgrid.net wrote:
  What RAID level to use, whether to use SCSI or SATA etc. are all pretty much
  "how long is a piece of string?" questions. If you have a really high-end
  hardware array, RAID 5 may be faster than RAID 1+0 is on a cheaper system.
 
  Basically
 
  RAID 5 = slower
  SATA = slower
  RAID 1+0 = faster
  SCSI = faster
  more physical disks = faster
  more expensive controller = faster
 
  ;)
 
  If you want to compare specific hardware you'll need to get your hands on
 it
  or find someone else who has already done a comparison. But it will make
 a
  huge difference to performance what disk array you have hooked up, just
  depends how much you want to spend
 
  Quoting Waynn Lue waynn...@gmail.com:
 
  I currently have a RAID 5 setup for our database server.  Our space is
  running out, so I'm looking to increase the disk space.  Since I'm doing
  that anyway, I decided to re-evaluate our current disk array.  I was
 told
  that RAID 5 isn't a good choice for databases since it's slower to
 write.
  In addition, I've also been considering setting up LVM to take quick db
  snapshots, after reading various links on the web (and posts to this
  list).
 
  So on to the questions!  First, if that's what I eventually want to do
  (get
  a new RAID server with LVM), do I need to do anything special to set up
  LVM
  on the new system?  Second, what is a good RAID setup for databases?
  RAID
  10?  0+1?  Third, I have the choice of using SATA or SCSI in conjunction
  with
  the RAID drives I choose.  How much of a difference is there in using
 SATA
  instead of SCSI, especially in light of whatever RAID I end up going
 with?
 
  Thanks for any insights,
  Waynn
 
 
 
 
 
 





Re: SPARC to x64 Transition

2009-02-18 Thread Aaron Blew
This is confirmed working on a small test set with MySQL 4.1.22
transitioning from SPARC to X64.  Thanks everyone!

-Aaron

Here's the test set we used:

CREATE TABLE `test_innodb` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `bigint` bigint(11) DEFAULT NULL,
  `float` float DEFAULT NULL,
  `double` double DEFAULT NULL,
  `deci` decimal(6,2) DEFAULT NULL,
  `var` varchar(255) DEFAULT NULL,
  `datetime` datetime DEFAULT NULL,
  `timestamp` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,

  big_unsigned bigint unsigned,
  float_unsigned float unsigned,
  int_unsigned  int unsigned,


  PRIMARY KEY (`id`),
  KEY `deci` (`deci`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;


insert into `test_innodb` values
 ('1' ,'9223372036854775807' ,'321.123' ,'123132321.213213' ,'12.20'
,'somehcar' ,'2009-02-10 10:44:10' ,'2009-02-17 10:46:05' ,(400 *
400) ,444.888 ,123456) ,
 ('2' ,'-9223372036854775807' ,'-67498.7' ,'-6.84616419684968e+17' ,'-0.84'
,'somevarchar' ,'2009-02-05 10:45:12' ,'2009-02-17 10:46:12' ,(500 *
300) ,444.888 ,123456) ,
 ('3' ,'0' ,'0.0' ,'0.0' ,'0.0' ,NULL ,'0000-00-00 00:00:00' ,'2009-02-17
10:46:12' ,(0) ,0.0 ,0);



On Mon, Feb 16, 2009 at 8:26 AM, Heikki Tuuri heikki.tu...@oracle.comwrote:

 Aaron,

 
 I've currently got a project to migrate a LARGE (3.5TB) MySQL data set from
 a Sun SPARC machine to a Sun x86 machine, both running Solaris 10 (though
 obviously one is x86 and the other is SPARC).  Is it possible to simply
 copy
 the data files from one host to the other or is a full mysqldump/import
 necessary to preserve data integrity?

 If a file copy doesn't work, why specifically would it fail?

 Thanks,
 -Aaron
 

 you can simply copy the files, whether InnoDB or MyISAM. As far as I know,
 all modern processors use the same floating point format. And all integer
 and other data structures are platform independent in MyISAM and InnoDB.

 Best regards,

 Heikki
 Innobase/Oracle





SPARC to x64 Transition

2009-02-13 Thread Aaron Blew
All,
I've currently got a project to migrate a LARGE (3.5TB) MySQL data set from
a Sun SPARC machine to a Sun x86 machine, both running Solaris 10 (though
obviously one is x86 and the other is SPARC).  Is it possible to simply copy
the data files from one host to the other or is a full mysqldump/import
necessary to preserve data integrity?

If a file copy doesn't work, why specifically would it fail?

Thanks,
-Aaron


Re: How do you backup HUGE tables?

2009-01-23 Thread Aaron Blew
I know how you feel!  I think your two best options are these:
1.) Use LVM snapshots per the MPB links you mentioned as a guide.  Your
incremental backup would be the binary logs that MySQL writes.  You could
copy any of this data off site by mounting the snapshots and using your
remote copy mechanism of choice.
2.) Create a slave and create your backups from it, again using the binary
logs as the incremental backup.  You could also use the snapshot technique
to create the initial data set for this host if you're not able to take this
host down for an extended period of time.

Claudio has an excellent point with innodb-file-per-table as well.  Just
make sure a single table will never grow to more than the maximum file size
of your filesystem.

Good luck,
-Aaron

On Fri, Jan 23, 2009 at 1:18 PM, Daevid Vincent dae...@daevid.com wrote:

 We have some INNODB tables that are over 500,000,000 rows. These
 obviously make for some mighty big file sizes:

 -rw-rw 1 mysql mysql 73,872,179,200 2009-01-22 02:31 ibdata1

 This can take a good amount of time to copy even just the file, and a
 mysqldump can take hours to export and import took 3 days (with the
 usual intelligent optimizations, locks, inserts, keys, etc.). Yowza!

 So, it's not really feasible or ideal to do this daily. We'd like to do
 some kind of daily diff and weekly or monthly full backup. Also, as
 any good backup strategy involves off-site backups, it's not very
 convenient or even feasible to transfer 80+ GB over our pipe every day
 (which according to iperf is getting around 11 MBytes/sec from our data
 center to our office).

 Looking for ideas as to what others are doing to backup their very large
 sets?

 We do have a master/slave setup. We're thinking of adding two more
 slaves that are read-only and not accessed via the web at all.  They would
 just sit there, effectively being backups.  One would be offsite in another
 building, and the logic is that we'll trickle in maybe 100k per minute as
 the data is inserted into the real M/S, so that should be negligible on our
 intra/internet.


 ---
 I've done some research here, but nothing stands out as the winner...
 but I'm open to any of these ideas if you can make a strong case for
 them.

 http://dev.mysql.com/doc/refman/5.1/en/mysqlhotcopy.html
 mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables.

 http://dev.mysql.com/doc/refman/5.0/en/innodb-backup.html
 InnoDB Hot Backup is an online backup tool you can use to backup your
 InnoDB database while it is running. InnoDB Hot Backup does not require
 you to shut down your database and it does not set any locks or disturb
 your normal database processing. InnoDB Hot Backup is a non-free
 (commercial) add-on tool with an annual license fee per computer on
 which the MySQL server is run.
 http://www.innodb.com/hot-backup/
 [not loving that it's a commercial tool]


 http://dev.mysql.com/doc/refman/5.0/en/replication-implementation-details.html

 http://dev.mysql.com/doc/refman/5.1/en/backup.html
 read the comments: "You can also take very fast online or hot backups if
 you have linux volume management or LVM ..." I knew there was a way to
 use LVM for backups somehow. Maybe a solution for us?
 http://www.mysqlperformanceblog.com/?s=backup

 http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

 Some other backup solutions maybe:
 http://www.debianhelp.co.uk/mysqlscript.htm
 http://mysqlbackup.phpmagazine.net/ (seems dead, but maybe it just works
 so well, no need to improve it?)
 http://www.ozerov.de/bigdump.php




Re: How much memory can mysql 5.1 take advantage of?

2009-01-22 Thread Aaron Blew
This doesn't work the same way as system RAM though.  You can't extend your
innodb buffer pool onto a block device or filesystem.  Though this
technology would be good for segregating things like InnoDB logs and mysql
binary logs.

-Aaron

On Thu, Jan 22, 2009 at 7:40 AM, mos mo...@fastmail.fm wrote:

 At 01:08 PM 1/20/2009, you wrote:

 While specing out a new server, I was wondering if there is any limit to
 how much memory can be allocated to mysql 5.1.  If a server has 16GB of ram,
 can mysql take advantage of that much ram (minus a reserved amount for the
 OS obviously)?  Is there any limit such as those imposed by 32-bit
 processors?

 Thanks!


 You can get an external ram drive fairly cheap that will handle 32gb per
 drive (4gb sticks) and can use raid 0 to give you larger drives. They cost
 around $500 for the hardware not including the RAM. On page 9 of the report
 http://techreport.com/articles.x/16255/9 they show database and web server
 performance that blows away all hard drives by a factor of 10x. They have a
 small battery backup that will preserve the contents between reboots and for
 a few hours during power loss. An option allows you to back up the contents
 to flash storage. Of course you should use a UPS on it. What appeals to me
 is you can use it 24/7 and it is not going to wear out. This is definitely
 going on my wish list. :-)

 http://techreport.com/articles.x/16255/1

 Mike





Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Aaron Blew
I'm also having a similar issue with some tables I've been trying to dump
(total data set is around 3TB).  I'm dumping directly from one host to
another (mysqldump -hSOURCE DATABASE | mysql -hLOCALHOST DATABASE) using
mysql 4.1.22.  One system is Solaris 10 SPARC, while the other is Solaris 10
x64 (64bit MySQL as well).

I wrote a script that starts a mysqldump process for each table within a
database, which shouldn't be a problem since the host currently has around
12G unused memory.  Midway through the dump I seem to lose the connection as
Dan described.  After attempting to drop/re-import (using a single process),
the larger tables continue to fail (though at different points) while some
of the small-medium sized tables made it across.

Anyone else run into this before? Ideas?

Thanks,
-Aaron


Obsoleting old ibdata files

2008-10-13 Thread Aaron Blew
I've run into the following scenario: we've got several database servers
with 2+TB of data spread across several 900GB InnoDB data files.  We can
move to innodb_file_per_table in an incremental fashion (ie ALTER TABLE
ENGINE='InnoDB' one table at a time), however, to complete this task, we'll
need to phase out the existing ibdata files and delete them.  Is there a way
to safely delete ibdata files after the data has been removed from them
without doing a mysqldump and re-import?  Unfortunately we can't take the
databases down for the multiple days it would take to do a full mysqldump
based migration.

Thanks,
-Aaron
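
For context, the incremental per-table step mentioned above is just a table
rebuild (the table name here is a placeholder):

```sql
-- Requires innodb_file_per_table in my.cnf (not settable at runtime on
-- older servers); the rebuild then moves the table's data out of the
-- shared ibdata files into its own .ibd file.
ALTER TABLE some_table ENGINE=InnoDB;
```

The catch, and the crux of the question, is that the shared ibdata files
never shrink: even after every table has been rebuilt they keep their
allocated size, and the documented way to reclaim that space has been a full
dump, removal of the ibdata files, and a re-import.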


Re: MySQL and SAN storage - a request for comments

2008-09-26 Thread Aaron Blew
Hi Michael,
Overall and most of the time, SANs are a good thing.  They have several
advantages over dedicated directly attached storage arrays:
1.) They're generally a lot smarter about how and when they write and read
to the disks.  Often they understand what's going on down at the head level,
and can leverage that for better performance.
2.) They've generally got a lot more cache than a directly attached array
(some systems can have up to 256GB of cache)
3.) They're a lot more reliable than many direct attached arrays.  There
have been many many hours put into algorithms to detect and predict disk
failures by these SAN vendors, and they're designed to keep that data online
as much as possible, as their reputation rides on their availability.  Hitachi
Data Systems (as one example) even offers configurations with a 100% data
availability guarantee (so long as the unit has power).
4.) Having all those spindles under one management/virtualization framework
makes you a lot more agile with how you can make use of your storage.  The
MySQL workloads your environment has may not all be striped across all the
spindles within the SANs, segregating the workloads.  However, using all the
spindles available can have advantages in some workloads as well, since not
all databases will be hammering down to the spindle all the time.

A SAN environment isn't always a trivial thing to operate, but it will save
a lot of time over managing 100s of direct attached arrays and can offer
performance capabilities way beyond what can be practically achieved by
using direct attached storage.

-Aaron


On Thu, Sep 25, 2008 at 6:38 PM, Michael Dykman [EMAIL PROTECTED] wrote:

 Hello all,

 I recently started employment with a company which has a lot of MySQL
 servers (100+ is my best estimate so far). All of their
 database servers, masters and slaves alike, use one of 2 SANs for
 data storage.  The servers are connected to dedicated switches with
 fibre to the SANs, and the SANs themselves seem to be well configured
 and tuned.

 However, it seems preposterous to me that all those very busy
 databases should, by design, have a common bottleneck and share a
 single point of failure. I am not deeply knowledgeable about SANs or
 their performance characteristics; my reaction thus far is pretty much
 intuition, but I can't help but picture the simple analogue of a single
 disk or a RAID 10 with synchronized spindles frantically thrashing
 back and forth to respond to tens of thousands of queries per second.

 Would anyone care to comment?  Is my concern justified or am I merely
 confused?

 --
  - michael dykman
  - [EMAIL PROTECTED]

  - All models are wrong.  Some models are useful.





Re: INDEXING ALL COLUMNS

2008-09-05 Thread Aaron Blew
We'd need more information on what the WHERE clauses of the queries
look like to assist with this.

-Aaron

On 9/5/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:
 Hi,

 What would you say about the below table . What can i do to make it more
 efficient.

 CREATE TABLE mailer_student_status (
   student_id decimal(22,0) NOT NULL default '0',
   param varchar(128) NOT NULL default '',
   value varchar(128) default NULL,
   PRIMARY KEY (student_id,param),
  KEY idx_value (value)
 )
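
Whether an extra index helps depends entirely on the queries, but as a
purely illustrative sketch: if lookups typically filter on param together
with value, a composite index serves both columns in one structure (names
follow the table above; the existing idx_value would then be redundant for
such queries):

```sql
-- Illustrative only: useful if queries filter on (param, value).
ALTER TABLE mailer_student_status
  ADD INDEX idx_param_value (param, value);
```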

 On Sat, Sep 6, 2008 at 1:16 AM, ewen fortune [EMAIL PROTECTED] wrote:

 Hi,

 Following on from what Mike mentioned, indexing all columns does not
 really help, as MySQL will at most use one index for a query, so it's
 important to pick your indexes carefully and consider constructing
 composite indexes. An index on a single column may not even be used
 due to poor cardinality.

 Ewen

 On Fri, Sep 5, 2008 at 9:27 PM, Krishna Chandra Prajapati
 [EMAIL PROTECTED] wrote:
  Hi all,
 
  I am looking for whether there is any specific reason for not indexing
  all columns of a table, and what the impact on performance is.  Although
  indexing is meant for getting great performance, why is indexing all
  columns not feasible?  (I read in the docs that not all columns should
  be indexed.)
 
  --
  Krishna Chandra Prajapati
 




 --
 Krishna Chandra Prajapati


-- 
Sent from my mobile device




Re: innodb/myisam performance issues

2008-09-04 Thread Aaron Blew
Here are a couple ideas:
* Decrease innodb_autoextend_increment to 8 or even 4.  You may see
additional IO wait because you're pre-allocating space in chunks
disproportionate to what you immediately need, causing bursty performance.
* If your remaining MyISAM tables don't need it, take 2GB of the key_buffer
allocation and put it towards the InnoDB buffer pool.

What are the system's specs?  What's its underlying storage?  What flags
were used when you created the filesystem(s)?  What OS/Version of MySQL are
you running?  Could you send us some iostat output?

Thanks and good luck,
-Aaron

On Thu, Sep 4, 2008 at 1:26 PM, Josh Miller [EMAIL PROTECTED]wrote:

 Good afternoon,

 I have recently converted a large table from MyISAM to InnoDB and am
 experiencing severe performance issues because of it.  HTTP response times
 have gone from avg .25 seconds to avg 2-3 seconds.  Details follow:

 PHP/MySQL website, no memcached, 3 web nodes that interact with DB, one
 that serves images, one master DB that serves all reads/writes, backup DB
 that only serves for backup/failover at this time (app being changed to
 split reads/writes, not yet).

 The one table that I converted is 130M rows, around 10GB data MyISAM to
 22GB InnoDB.  There are around 110 tables on the DB total.


 My.cnf abbreviated settings:

 [mysqld]
 port  = 3306
 socket  = /tmp/mysql.sock
 skip-locking
 key_buffer= 3G
 sort_buffer_size  = 45M
 max_allowed_packet  = 16M
 table_cache = 2048

 tmp_table_size= 512M
 max_heap_table_size = 512M

 myisam_sort_buffer_size = 512M
 myisam_max_sort_file_size = 10G
 myisam_repair_threads   = 1
 thread_cache_size   = 300

 query_cache_type  = 1
 query_cache_limit = 1M
 query_cache_size  = 600M

 thread_concurrency  = 8
 max_connections   = 2048
 sync_binlog = 1

 innodb_buffer_pool_size = 14G
 innodb_log_file_size  = 20M
 innodb_flush_log_at_trx_commit=1
 innodb_flush_method = O_DIRECT
 skip-innodb-doublewrite
 innodb_support_xa = 1
 innodb_autoextend_increment = 16
 innodb_data_file_path   = ibdata1:40G:autoextend

 We're seeing a significantly higher percentage of IO wait on the system,
  averaging 20% now with the majority of that being user IO.  The system is
 not swapping at all.

 Any ideas for what to check or modify to increase the performance here and
 let MyISAM and InnoDB play better together?  The plan is to convert all
 tables to InnoDB which does not seem like a great idea at this point, we're
 considering moving back to MyISAM.

 Thanks!
 Josh Miller, RHCE





Automatic email to database member on X date?

2008-07-04 Thread Aaron Norlund


Hello all,

I have no experience with MySQL, but am reading through the available texts. 
However, I feel what I need to do is probably way into these, so perhaps 
someone could give me a hand.

I have taken over administering a website with a user database made with MySQL.
It keeps track of members of our owners' association. I would like to set up a
script that will automatically email members when their membership expiration
is nearing.

Will someone please explain how this can be accomplished, or point me in a
direction to find out?

I'm not sure if it helps, but we 'manage' the database through phpMyadmin.
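
The database side of this is a query for soon-to-expire memberships. The
schema below is invented for illustration, since the real one isn't shown; a
scheduled job (e.g. a daily cron script) would run the query and hand the
rows to whatever sends the mail, as MySQL itself does not send email.

```sql
-- Hypothetical schema: members(email VARCHAR, expires_on DATE).
-- Members whose membership lapses within the next 30 days.
SELECT email, expires_on
FROM members
WHERE expires_on BETWEEN CURDATE()
                     AND CURDATE() + INTERVAL 30 DAY;
```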

Thank you for your help!

Aaron N.





Re: Performance problem with more than 500 concurrent queries

2008-06-26 Thread Aaron Blew
Generally your error log will be HOSTNAME.err in your data_dir.

-Aaron

On Thu, Jun 26, 2008 at 8:46 AM,  [EMAIL PROTECTED] wrote:
 Sorry about the long signature in the email. I forgot to remove it...

 Guillermo






 From:
 [EMAIL PROTECTED]
 To:
 mysql@lists.mysql.com
 Date:
 26.06.2008 17:39
 Subject:
 Re: Performance problem with more than 500 concurrent queries



 Hello,

 thanks for the answer.

 Where is the error.log stored? I ran mysqladmin; it asked for the
 password and exited immediately. But I cannot find any error.log.

 Thanks,

 Guillermo


 ---
 Guillermo Acilu
 Senior Engineer, Koiaka GmbH

 Koiaka GmbH
 Riesserkopfstr. 17
 82467 Garmisch-Partenkirchen
 Tel: +49 (0)8821 9679555
 Fax: +49 (0)8821 730 9185
 Mailto:[EMAIL PROTECTED]
 http://www.koiaka.com

 Amtsgericht München: HR B 161 041
 Geschäftsführer: Guillermo Acilu
 Sitz: Garmisch-Partenkirchen


 This e-mail message may contain confidential and/or privileged
 information. If you are not an addressee or otherwise authorized to
 receive this message, you should not use, copy, disclose or take any
 action based on this e-mail or any information contained in the message.
 If you have received this material in error, please advise the sender
 immediately by reply e-mail and delete this message completely.
 Thank you!





 From:
 Ananda Kumar [EMAIL PROTECTED]
 To:
 [EMAIL PROTECTED] [EMAIL PROTECTED]
 Cc:
 mysql@lists.mysql.com
 Date:
 26.06.2008 16:30
 Subject:
 Re: Performance problem with more than 500 concurrent queries



 do this

 mysqladmin -uroot -p debug

 and check the error.log, see if there are any locks on the tables.


 On 6/26/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Hello guys,

 I am new to this list and also kind of new to mysql too.

 I have a multi-thread application written in Ruby. The application is
 reading one table that has two columns (Father, Children). As you might
 suspect, this is a tree. The fields are foreign keys to a second table,
 but the second table is not involved in the problem. The father-children
 table has around 10 rows.

 What the code does is simple. It starts in the root and it navigates the
 tree up to the leafs. The first thread takes the root and runs a select
 to
 get all the children. Then it triggers new threads per children and it
 ends, leaving the other threads alive. Every thread does exactly the
 same
 until they reach the leafs.

 When the threads reach the leafs, they read the description from the
 other
 table using the leaf code, write the value in a global array and end.

 With a few rows, the algorithm is very fast. The problem starts when
 each
 node has many children. To give you an idea, in one point in time there
 are more than 600 threads running, but for some reason I always see no
 more than two queries running in parallel from the MySQL Administrator.

 Each thread opens a new connection, runs the select, closes the
 connection
 and ends. I have the default maximum connections, 100. So I should see
 more queries in parallel than only two or three. All the connections are
 constantly used while the algorithm runs. So the connections are open,
 but
 the database is not doing anything with them. It sounds like the table
 is
 locked, or something. I have checked the code several times but
 everything
 is correct. The code is only 25 lines long.

 The other symptom I can see is that when I start the script, there are
 up
 to 30 or 40 queries in parallel, but then the number goes down quickly
 until it reaches only 2 or 3 concurrent queries a few seconds later. And
 it stays like this.

 I've started playing around with the caches and memory values for MySQL
 server, but to be honest, I am just guessing and the performance does
 not
change. I am an Oracle DBA and I am trying to find some points in common
 with
 mysql to gain performance, but I cannot find the source of the problem.

 I am with Mac OS X Leopard in a very fast machine and MySQL 5.0.51b. The
 problem is also present in 5.1.25-rc

 Any ideas why is this happening?

 Thanks,

 Guillermo





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Moving Database from Windows to Linux

2008-06-26 Thread Aaron Blew
I would suggest running something like this from your new linux server:
mysqldump -u USERNAME -p -h WINDOWS_HOST --opt DATABASE | mysql -u
root -h localhost DATABASE

This will pump the output directly from mysqldump into mysql running
on the linux machine, so you don't need another transfer mechanism
(FTP, scp, etc.).

-Aaron

On Thu, Jun 26, 2008 at 5:09 PM,  [EMAIL PROTECTED] wrote:
 See Thread at: http://www.techienuggets.com/Detail?tx=42035 Posted on behalf 
 of a User

 Hi,

 I'm running MySQL 5.0.51 Community edition. I have a half a gigabyte sized 
 database that I'd like to move from a Windows 2000 based server to a Linux 
 server. What's the best way to do this? I normally take complete backup using 
 MySQL administrator which creates a file with sql to recreate the tables and 
 insert the data. Can I simply ftp this to the Linux box and restore it to a 
 database there? Is there a MySQL administrator for Linux?

 Thanks









InnoDB File Fragmentation

2008-06-20 Thread Aaron Blew
I have a question about how InnoDB deals with fragmentation within
its data files.  Let me describe my usage scenario to you:

1.) Records are inserted into a InnoDB table.  We'll call this table
A.  It contains several different kinds of columns including
VARCHARs.
2.) Records are then processed by a process running on another server.
 The processed information is then stored in table B (this table
also has VARCHARs), and the row that was processed is DELETEd from
table A.

This happens tens of times per second.  Over time, additional InnoDB
data files have been added because of data growth.

My questions are these:
* How does InnoDB store VARCHAR information?  Is it based on the
column max length?
* How does InnoDB decide to re-use free blocks within the data files?
Are rows prone to fragment?

Thanks,
-Aaron




create view not working

2008-06-17 Thread Aaron Ryden

Why doesn't this work? The SELECT works perfectly.

create view cost_report as
SELECT c_name, d_manuf as Manufacturer, d_model as Model, count(1) as 
Number Enrolled,

d_price as Monthly Price, count(1)*d_price as Cost
FROM `b_devices` A
left join b_device_types B
on A.d_id = B.d_id
left join b_prices C
on A.d_id = C.d_id AND A.c_id = C.c_id
left join b_company D
on A.c_id = D.c_id
group by A.d_id, A.c_id
order by c_name
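One possible cause, offered as a guess (the archive may simply have stripped the quoting): the multi-word aliases Number Enrolled and Monthly Price must be backquoted, in a CREATE VIEW just as in a plain SELECT. A sketch of the same statement with the aliases quoted:

```sql
CREATE VIEW cost_report AS
SELECT c_name,
       d_manuf AS Manufacturer,
       d_model AS Model,
       COUNT(1) AS `Number Enrolled`,
       d_price AS `Monthly Price`,
       COUNT(1) * d_price AS Cost
FROM `b_devices` A
LEFT JOIN b_device_types B ON A.d_id = B.d_id
LEFT JOIN b_prices C ON A.d_id = C.d_id AND A.c_id = C.c_id
LEFT JOIN b_company D ON A.c_id = D.c_id
GROUP BY A.d_id, A.c_id
ORDER BY c_name;
```

If the aliases were already quoted when run, another common culprit is the view definer lacking the CREATE VIEW privilege.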





Re: Block size of filesystem

2008-05-09 Thread Aaron Blew
Will you be using the MyISAM or InnoDB table engines?

I had heard that InnoDB uses 16k blocks internally, so that might be a good
starting point, though I'd love to have someone confirm or deny that this is
actually true.

-Aaron


On Fri, May 9, 2008 at 12:01 AM, Iñigo Medina García 
[EMAIL PROTECTED] wrote:

 Hi friends,

 we're developing a new web application that works with a db around 1gb
 and 30 tables.

We work with linux, and I'm evaluating the benefit of making a specific
 partition (ext3) for mysql in order to have a bigger Block Size. Block
 Size by default in / partition is 4096.

 Do you think that could be good? Any experience?

 Iñigo

 --
 
 Iñigo Medina García
 Librería Díaz de Santos
 [EMAIL PROTECTED]





Re: Block size of filesystem

2008-05-09 Thread Aaron Blew
That's true in some workloads, but with InnoDB you'll usually run into data
file fragmentation before filesystem fragmentation (unless it's a shared
system).  This is especially true if your application runs a lot of updates
or deletes as random chunks of space will become free at different points
within the InnoDB data file.  In a business intelligence application where
there's not much deleting this probably isn't a concern...

I think the best way to approach it may be to look at your average row size
and base your InnoDB and filesystem block sizes around that.

-Aaron

On Fri, May 9, 2008 at 12:43 AM, Ben Clewett [EMAIL PROTECTED] wrote:



 I would use as large a block size as you dare, especially with InnoDB.
 It makes reading and writing faster, since it cuts down seek time, reduces disk
 fragmentation and avoids block table reads.  With MyISAM you have lots of
 files, but if you only have a few again might work well with a large block
 size.

 Also have a look at the stripe size of your raid system, might work well
 aligning them if you can.  This URL also gives some tips for an ext3 file
 system on RAID, look for 'stride':

 http://insights.oetiker.ch/linux/raidoptimization.html
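As a worked example of the stride tip from that article (the 64 KiB RAID chunk and 4 KiB filesystem block are assumed illustrative values, not numbers from this thread): stride is the RAID chunk size divided by the filesystem block size.

```shell
# stride = RAID chunk size / filesystem block size
# Assumed example values: 64 KiB chunks, 4 KiB blocks.
CHUNK_KB=64
BLOCK_KB=4
STRIDE=$((CHUNK_KB / BLOCK_KB))
echo "stride=$STRIDE"
# A matching mke2fs invocation would then look like (not run here;
# /dev/md0 is a placeholder device):
#   mke2fs -j -b 4096 -E stride=16 /dev/md0
```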

 Ben

 Iñigo Medina García wrote:

 Hi friends,

 we're developing a new web application that works with a db around 1gb
 and 30 tables.

 We work with linux, and I'm evaluating the benefit of making a specific
 partition (ext3) for mysql in order to have a bigger Block Size. Block
 Size by default in / partition is 4096.

 Do you think that could be good? Any experience?

 Iñigo


 *
 This e-mail is confidential and may be legally privileged. It is intended
 solely for the use of the individual(s) to whom it is addressed. Any
 content in this message is not necessarily a view or statement from Road
 Tech Computer Systems Limited but is that of the individual sender. If
 you are not the intended recipient, be advised that you have received
 this e-mail in error and that any use, dissemination, forwarding,
 printing, or copying of this e-mail is strictly prohibited. We use
 reasonable endeavours to virus scan all e-mails leaving the company but
 no warranty is given that this e-mail and any attachments are virus free.
 You should undertake your own virus checking. The right to monitor e-mail
 communications through our networks is reserved by us

  Road Tech Computer Systems Ltd. Shenley Hall, Rectory Lane, Shenley,
  Radlett, Hertfordshire, WD7 9AN. - VAT Registration No GB 449 3582 17
  Registered in England No: 02017435, Registered Address: Charter Court,
  Midland Road, Hemel Hempstead,  Hertfordshire, HP2 5GE.
 *






Comparing keys in two tables

2007-10-26 Thread Aaron Fischer

Greetings!

I have a problem that it seems would best be solved using  
subqueries.  However, I am working on a server that is running MySQL  
3.23.58, so subqueries are not available to me.


What I am trying to do:
I have two tables, each containing approximately 37,000 records.  I  
want to compare the index field in Table A to the index field in  
Table B.  I would like to see all records where the index in Table A  
does not exist in Table B.


Thoughts?  Suggestions?

Much appreciated!

-Aaron





Re: Comparing keys in two tables

2007-10-26 Thread Aaron Fischer

Thanks Peter and Baron, these both worked well.

The LEFT JOIN took .1919 seconds and the LEFT OUTER JOIN
took .1780 seconds.


=)


On Oct 26, 2007, at 11:37 AM, Peter Brawley wrote:


Aaron

An exclusion join:

SELECT a.col
FROM a
LEFT JOIN b ON a.col=b.col
WHERE b.col IS NULL;

PB







mysqld fails to start with error unknown option '--enable-named-pipe'

2007-08-27 Thread Aaron Stromas
 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic 
-fasynchronous-unwind-tables' 'CXX=gcc' 'LDFLAGS='


--
Aaron Stromas  | Tik-tik-tik!!!... ja, Pantani is weg...
mailto:[EMAIL PROTECTED]  |BRTN commentator
+972 (0)54-6969581 |L'Alpe d'Huez
http://www.izoard.com  |1995 Tour de France






Query Question

2007-04-16 Thread Aaron Clausen

I have a couple of very simple tables to handle a client signin site:

The client table has the following fields:
 client_id int(11) primary key auto_increment
 first_name char(90)
 last_name char(90)

The signin table has the following fields
 record_id int primary key auto_increment
 client_id int
 date datetime

Essentially, the client enters his id and it creates a record in the
signin table.

I need a query that can identify all the clients who signed in for the
first time during a specific month.  I have fought this one for a
couple of days now and just can't seem to get it.
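One hedged way to phrase this (table and column names taken from the post; the month boundaries are an assumed example) is to group by client and filter on the earliest signin per client:

```sql
-- Clients whose FIRST signin falls in March 2007 (example month).
SELECT c.client_id, c.first_name, c.last_name, MIN(s.date) AS first_signin
FROM client c
JOIN signin s ON s.client_id = c.client_id
GROUP BY c.client_id, c.first_name, c.last_name
HAVING MIN(s.date) >= '2007-03-01'
   AND MIN(s.date) <  '2007-04-01';
```

The HAVING clause is the key step: filtering on MIN(s.date) excludes clients who signed in that month but had already signed in earlier.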

--
Aaron Clausen   [EMAIL PROTECTED]




Re: ENCODE() and DECODE()

2007-03-13 Thread Aaron Cannon

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160

Googling for
ASP AES
I found several promising results.


- --
Skype: cannona
MSN/Windows Messenger: [EMAIL PROTECTED] (don't send email to the hotmail
address.)
- - Original Message -
From: Neil Tompkins [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, March 13, 2007 3:45 AM
Subject: Re: ENCODE() and DECODE()



I'm using ASP.  Do you know any resources that I could use ?




From: Wm Mussatto [EMAIL PROTECTED]
To: Neil Tompkins [EMAIL PROTECTED]
Subject: Re: ENCODE() and DECODE()
Date: Mon, 12 Mar 2007 15:27:26 -0700 (PDT)

On Mon, March 12, 2007 15:05, Neil Tompkins said:
 Do you think these functions would be ideal for my requirements in
 terms
 of
 encrypting credit card numbers when stored in a database ?

Actually, no.  If you are working in a language like perl, look at encrypting
the information before storing it. I think ENCODE and DECODE are too
weak.

in perl I use use Crypt::CBC; and then picked a strong cypher. If you use
perl, please go to cpan.org for the details.  BTW the reason for using
the blob type is to avoid truncation. After it's encoded, removing trailing
spaces is a BAD THING.

Bill


From: Wm Mussatto [EMAIL PROTECTED]
To: Neil Tompkins [EMAIL PROTECTED]
CC: mysql@lists.mysql.com
Subject: Re: ENCODE() and DECODE()
Date: Mon, 12 Mar 2007 13:54:41 -0700 (PDT)

On Mon, March 12, 2007 13:04, Neil Tompkins said:
  Sorry Bill, I'm unsure what you mean by bin
My error, too early in the morning here 'blob'
 
 From: William R. Mussatto [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Subject: Re: ENCODE() and DECODE()
 Date: Mon, 12 Mar 2007 10:31:53 -0700 (PDT)
 
 On Mon, March 12, 2007 8:04, Neil Tompkins said:
   I've been researching the best method to store credit card
   numbers
 in
  a
   database which are saved encrypted.  My version of mysql is 3.23
 therefore
   I
   think the only function I could use is
  
   ENCODE() and DECODE().
  
   I've tried these functions and they appear to work as I want.
I've
 a
   couple
   of questions though, can I use varchar when saving the data and
are
 these
   functions suitable for my requirements ?
  
   Thanks,
   Neil
 use 'Bin' versions of fields since the encoded data may be binary.
 
 Bill
 
 
 
 
  _
  Txt a lot? Get Messenger FREE on your mobile.
  https://livemessenger.mobile.uk.msn.com/
 
 



 _
 Solve the Conspiracy and win fantastic prizes.
 http://www.theconspiracygame.co.uk/








_
Get Hotmail, News, Sport and Entertainment from MSN on your mobile.
http://www.msn.txt4content.com/





-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (MingW32) - GPGrelay v0.959
Comment: Key available from all major key servers.

iD8DBQFF9rLrI7J99hVZuJcRA9CdAKCBhJ+do8Y3ouGozoBYdTnNR/py+QCgvYTs
YCKVLb0a7gkZPbNy73WtH2k=
=O8xx
-END PGP SIGNATURE-




Re: MD5()

2007-03-10 Thread Aaron Cannon

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160

Before you implement anything, I would do a lot of research about
encryption.  No offense, but from your question, it is clear that you know
next to nothing about encryption.  That's fine.  A few years ago, I was the
exact same way.  However, encryption is somewhat complex, so if you're going
to do it, you should do it right.  In my opinion, poorly implemented
encryption can be worse than no encryption at all.

You should also think carefully before storing customers credit card
numbers.  You should seriously consider whether or not it is really
necessary to do so.  If it is simply for the purpose of rebilling, a lot of
credit card processing API's will handle this for you.

Finally, what another poster said is right on the money.  If you are in a
shared hosting environment, do not even think about handling customers
credit cards.  There are too many ways security can be compromised on such
systems.  If that is in fact the case, I would suggest you outsource the
customer billing.  Remember also that security breaches of this type can
seriously damage the reputation of any organization.

Aaron


- --
Skype: cannona
MSN/Windows Messenger: [EMAIL PROTECTED] (don't send email to the hotmail
address.)
- - Original Message -
From: Neil Tompkins [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Saturday, March 10, 2007 4:17 AM
Subject: MD5()



I'm looking to use MD5() to encrypt credit card numbers.  How do I
unencrypt this when reading the value ?

Thanks,
Neil

_
Get Hotmail, News, Sport and Entertainment from MSN on your mobile.
http://www.msn.txt4content.com/





-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (MingW32) - GPGrelay v0.959
Comment: Key available from all major key servers.

iD8DBQFF8tlUI7J99hVZuJcRA+kXAKDEB6EU0QzTrTZu72wPUe+43Pi2TwCeKTFw
UDnwBp4Wwt+/n4YAn6SPfl4=
=HRjm
-END PGP SIGNATURE-




/etc/my.cnf in 5.0.27 RHEL4 RPMS is broken!

2007-03-06 Thread Aaron Scamehorn

Description:
The /etc/my.cnf that is included with RHEL4 5.0.27 RPMS is
incompatible with the /etc/init.d/mysql start scripts!

[EMAIL PROTECTED] rhel4]$ sudo rpm -Uveh MySQL*.rpm
Password:
warning: MySQL-client-standard-5.0.27-0.rhel4.i386.rpm: V3 DSA
signature: NOKEY,  key ID 5072e1f5
Preparing...###
[100%]
   1:MySQL-shared-compat###
[ 25%]
   2:MySQL-client-standard  ###
[ 50%]
   3:MySQL-devel-standard   ###
[ 75%]
   4:MySQL-server-standard  ###
[100%]
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h batman.cogcap.com password 'new-password'
See the manual for more instructions.

NOTE:  If you are upgrading from a MySQL <= 3.22.10 you should run
the /usr/bin/mysql_fix_privilege_tables. Otherwise you will not be
able to use the new GRANT command!

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com
Starting MySQLCouldn't find MySQL manager or server[FAILED]
How-To-Repeat:
run /etc/init.d/mysql again!
Fix:
Remove the following section from /etc/my.cnf:
[mysql.server]
user=mysql
basedir=/var/lib

In /etc/init.d/mysql, parse_server_args sets $bindir=/var/lib/bin.
So, /usr/bin/mysqld_safe is not found.

Submitter-Id:  Aaron Scamehorn
Originator:
Organization:
 Cognitive Capital, LLC.
MySQL support: none 
Synopsis:  Bad /etc/my.cnf in RHEL4 5.0.27 RPMS
Severity:  critical 
Priority:  high
Category:  mysql
Class: support
Release:   mysql-5.0.27-standard (MySQL Community Edition -
Standard (GPL))

C compiler:gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)
C++ compiler:  gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)
Environment:

System: Linux batman.cogcap.com 2.6.9-42.0.8.ELsmp #1 SMP Tue Jan 30
12:33:47 EST 2007 i686 i686 i386 GNU/Linux
Architecture: i686

Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc
/usr/bin/cc
GCC: Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.6/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--disable-checking --with-system-zlib --enable-__cxa_atexit
--disable-libunwind-exceptions --enable-java-awt=gtk
--host=i386-redhat-linux
Thread model: posix
gcc version 3.4.6 20060404 (Red Hat 3.4.6-3)
Compilation info: CC='gcc'  CFLAGS='-O2 -g -pipe -m32 -march=i386
-mtune=pentium4'  CXX='gcc'  CXXFLAGS='-O2 -g -pipe -m32 -march=i386
-mtune=pentium4'  LDFLAGS=''  ASFLAGS=''
LIBC: 
lrwxrwxrwx  1 root root 13 Feb 13 15:29 /lib/libc.so.6 - libc-2.3.4.so
-rwxr-xr-x  1 root root 1512916 Aug 12  2006 /lib/libc-2.3.4.so
-rw-r--r--  1 root root 2418632 Aug 12  2006 /usr/lib/libc.a
-rw-r--r--  1 root root 204 Aug 12  2006 /usr/lib/libc.so
Configure command: ./configure '--disable-shared'
'--with-server-suffix=-standard' '--without-embedded-server'
'--with-innodb' '--with-archive-storage-engine' '--without-bench'
'--with-zlib-dir=bundled' '--with-big-tables' '--enable-assembler'
'--enable-local-infile' '--with-mysqld-user=mysql'
'--with-unix-socket-path=/var/lib/mysql/mysql.sock' '--with-pic'
'--prefix=/' '--with-extra-charsets=complex' '--with-yassl'
'--exec-prefix=/usr' '--libexecdir=/usr/sbin' '--libdir=/usr/lib'
'--sysconfdir=/etc' '--datadir=/usr/share'
'--localstatedir=/var/lib/mysql' '--infodir=/usr/share/info'
'--includedir=/usr/include' '--mandir=/usr/share/man'
'--enable-thread-safe-client' '--with-comment=MySQL Community Edition -
Standard (GPL)' '--with-readline' 'CC=gcc' 'CFLAGS=-O2 -g -pipe -m32
-march=i386 -mtune=pentium4' 'CXXFLAGS=-O2 -g -pipe -m32 -march=i386
-mtune=pentium4' 'CXX=gcc' 'LDFLAGS='






Re: [inbox] Re: Breaking Up Tables

2007-03-01 Thread Aaron Cannon
It's nothing to do with MySQL.  split is a command that you can find under 
many flavors of Linux, and Linux-like OSes.


At the shell prompt, type:
man split
to see the documentation.

However, if you have bzip2 on your system, I would recommend running that 
first, in order to compress it as much as possible.


So, to summarize, this is what I would do:

1. run mysqldump to dump my data and save it to a file, for instance, db.sql

2. type:
bzip2 -9 db.sql
This will give you a file named db.sql.bz2

3. If db.sql.bz2 is still too large, I would run split:
split bla bla bla
(Replace bla bla bla with the options for split.  I never use it, so I don't 
know what they are.)


This will give you a few smaller files named db.sql.bz2.1 db.sql.bz2.2 ETC.

4. Transfer these files to my other machine.

5. reassemble the smaller files into one big file:
cat db.sql.bz2.1 db.sql.bz2.2 db.sql.bz2.3 \
  > db.sql.bz2

6. decompress the db.sql.bz2 file
bunzip2 db.sql.bz2

7. import db.sql into wherever.
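The seven steps above can be sketched end to end. This is a minimal runnable version of the same round trip, with assumptions: a stand-in file replaces the real mysqldump output, GNU split's -d numeric suffixes are used for predictable chunk names, and the 4k chunk size is purely illustrative.

```shell
set -e
seq 1 5000 > db.sql                      # stand-in for the mysqldump output (step 1)
bzip2 -9 db.sql                          # step 2: produces db.sql.bz2, removes db.sql
split -b 4k -d db.sql.bz2 db.sql.bz2.    # step 3: db.sql.bz2.00, .01, ... (GNU split)
rm db.sql.bz2                            # pretend only the chunks were transferred (step 4)
cat db.sql.bz2.?? > db.sql.bz2           # step 5: glob sorts, so chunks concatenate in order
bunzip2 db.sql.bz2                       # step 6: back to the original db.sql
wc -l < db.sql                           # step 7 would be: mysql DBNAME < db.sql
```

The only part that needs care is chunk ordering in step 5; numeric suffixes keep the shell glob in the right order.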



Hope this helps.

Aaron


--
Skype: cannona
MSN/Windows Messenger: [EMAIL PROTECTED] (don't send email to the hotmail 
address.)
- Original Message - 
From: [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Thursday, March 01, 2007 9:15 AM
Subject: Re: [inbox] Re: Breaking Up Tables



-Original Message-
From: [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thu, 1 Mar 2007 10:35 AM
Subject: Re: [inbox] Re: Breaking Up Tables


if you have shell access on the server, why not just use
split? Create your big dump file, split it into smaller
chunks and use cat on the other end to reassemble the pieces.
if you don't, just have someone at your host provider who does
do it.


Sounds perfect! I just spent 10 minutes trying to research splitting in 
the MySQL Ref. Man. and couldn't find anything. Can you point me, or give 
me a sample command? Also to re-stitch it together on the other end?

TIA,
Tony

AOL now offers free email to everyone.  Find out more about what's free 
from AOL at AOL.com.








Re: MySQL Daylight Savings Time Patch

2007-02-21 Thread Aaron Cannon

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

If it's any consolation, I got the exact same warnings.  However, I don't
know if it's normal either.


- --
Skype: cannona
MSN/Windows Messenger: [EMAIL PROTECTED] (don't send email to the hotmail
address.)
- - Original Message -
From: Jerry Schwartz [EMAIL PROTECTED]
To: 'Paul DuBois' [EMAIL PROTECTED]; 'Sun, Jennifer'
[EMAIL PROTECTED]; 'Dan Buettner' [EMAIL PROTECTED];
'Chris White' [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, February 21, 2007 8:45 AM
Subject: RE: MySQL Daylight Savings Time Patch



I just ran mysql_tzinfo_to_sql on a CentOS (Linux) system, and it
complained
about the various Riyadh time zones:

Warning: Unable to load '/usr/share/zoneinfo/right/Mideast/Riyadh87' as
time
zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/right/Mideast/Riyadh88' as
time
zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/right/Mideast/Riyadh89' as
time
zone. Skipping it.

etc.

Any idea whether or not this is normal?

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341









-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (MingW32) - GPGrelay v0.959
Comment: Key available from all major key servers.

iD8DBQFF3GZ+I7J99hVZuJcRAvUUAJ4xHKNQtxYBSrpDqadTzPdBx3uQIwCfRZkL
uQ5ODv/bD5SN5CW9JpYIlxQ=
=z+FD
-END PGP SIGNATURE-





Re: 5.1.14-beta with ssl build failure

2007-01-05 Thread Aaron Cannon

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

It is unfortunate that mysql does not offer a binary version with ssl
support for Linux.  Does anyone know why?  It can't be for export reasons,
as they do offer a windows version with SSL support.

Aaron Cannon



- --
Skype: cannona
MSN/Windows Messenger: [EMAIL PROTECTED] (don't send email to the hotmail
address.)

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (MingW32) - GPGrelay v0.959
Comment: Key available from all major key servers.

iD8DBQFFnniYI7J99hVZuJcRApGtAKD73Z3nSn4viL5mulFj0ijNjZWaOgCgrN2w
JN/foKnc4hmXwzoaiupjbr4=
=n7j+
-END PGP SIGNATURE-





Re: Data types and space needs

2007-01-05 Thread Aaron Cannon

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Actually, I think he was asking if the size used by the storage engine
would change if you used, for example, int(2) as opposed to int(10).  My guess
is it would not, but that's just a guess.

Aaron Cannon


- --
Skype: cannona
MSN/Windows Messenger: [EMAIL PROTECTED] (don't send email to the hotmail
address.)
- - Original Message -
From: Dan Buettner [EMAIL PROTECTED]
To: Olaf Stein [EMAIL PROTECTED]
Cc: MySql mysql@lists.mysql.com
Sent: Friday, January 05, 2007 8:45 AM
Subject: Re: Data types and space needs



Olaf, not a silly question at all.  You can indeed save space by using
different forms of integer.

See
http://dev.mysql.com/doc/refman/5.0/en/numeric-type-overview.html
and
http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html

From that second page:

Storage Requirements for Numeric Types

Data Type Storage Required
TINYINT 1 byte
SMALLINT 2 bytes
MEDIUMINT 3 bytes
INT, INTEGER 4 bytes
BIGINT 8 bytes

You may also be able to use UNSIGNED to extend the range of a column,
if you don't need to store negative values.
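To make that table concrete, a small sketch (the table and its columns are invented for illustration) that picks the narrowest type covering each column's range:

```sql
CREATE TABLE sensor_reading (
  id     INT UNSIGNED     NOT NULL AUTO_INCREMENT,  -- 4 bytes, 0 .. 4294967295
  room   TINYINT UNSIGNED NOT NULL,                 -- 1 byte,  0 .. 255
  temp_c SMALLINT         NOT NULL,                 -- 2 bytes, -32768 .. 32767
  PRIMARY KEY (id)
);
```

Per the storage table above, each such row's integer columns take 7 bytes instead of the 12 that three plain INTs would use.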

HTH,
Dan

On 1/5/07, Olaf Stein [EMAIL PROTECTED] wrote:

Hi All,

I have somewhat of a silly question.
If I define a column as int it needs 4 bytes.
Do I gain anything space wise if I restrict the length to e.g. 10,
int(10),
or is this only a logical restriction?


Thanks
Olaf








-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (MingW32) - GPGrelay v0.959
Comment: Key available from all major key servers.

iD8DBQFFnnnGI7J99hVZuJcRAkstAJsEw8S1ZxnEpL+oXvpDsTfKx3C34QCgpnNT
hd379sQHorwV3eV9NcYeq0E=
=WAXX
-END PGP SIGNATURE-





select statement question

2007-01-03 Thread Aaron Cannon

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all.  I hope this is the right place to ask this.

I have two tables, A and B.  Each of these tables has a column with
integers.  The column in table A is C and the one in B is D.

I need a select statement that will return all records whose C value is not
in any row in D.

for example:
C =
1
2
3
4
5
6
7
8
11

D =
2
4
6
8
10

and the statement would return:
1
3
5
7
11

Probably an easy question for those of you more experienced but I have no
clue.
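This is the same shape as the exclusion-join answer in the "Comparing keys in two tables" thread above; a sketch using the letters from this post as literal table and column names:

```sql
-- All values of A.C with no matching row in B.D (exclusion join;
-- works on pre-subquery MySQL versions as well).
SELECT A.C
FROM A
LEFT JOIN B ON A.C = B.D
WHERE B.D IS NULL;
```

Against the sample data this returns 1, 3, 5, 7 and 11: the LEFT JOIN keeps every row of A, and B.D is NULL exactly where no match exists.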

Thanks in advance.

Sincerely
Aaron Cannon


- --
Skype: cannona
MSN/Windows Messenger: [EMAIL PROTECTED] (don't send email to the hotmail
address.)

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (MingW32) - GPGrelay v0.959
Comment: Key available from all major key servers.

iD8DBQFFnBbLI7J99hVZuJcRAiF3AJ4mR4UjLa0sG+hIDbErj7LvuzfU4wCggEDh
DtnfmVsHL84me4qVw/mA4s8=
=l2gE
-END PGP SIGNATURE-





Re: Weird MySQL Connection Issues

2006-02-13 Thread Aaron Axelsen
We are running mysql on a Debian system, so we shouldn't have any of
those windows problems.  It's really quite confusing, there were no
mysql errors when we noticed the problem.

The only guess we currently have is to wait until it happens again, and
make sure we dump the stats to further investigate.

Any other suggestions are welcome, thanks!

-- Aaron

[EMAIL PROTECTED] wrote:
 If you are running MySQL on Windows, then I'm wondering 
 whether you are having a problem with running out of 
 available ports, for clients to connect to MySQL on.

 This may be your problem:

 From the manual, 2.3.16. MySQL on Windows Compared to MySQL 
 on Unix

 ***
 MySQL for Windows has proven itself to be very stable. The 
 Windows version of MySQL has the same features as the 
 corresponding Unix version, with the following exceptions: 

 Limited number of ports 

 Windows systems have about 4,000 ports available for client 
 connections, and after a connection on a port closes, it 
 takes two to four minutes before the port can be reused. In 
 situations where clients connect to and disconnect from the 
 server at a high rate, it is possible for all available 
 ports to be used up before closed ports become available 
 again. If this happens, the MySQL server appears to be 
 unresponsive even though it is running. Note that ports may 
 be used by other applications running on the machine as 
 well, in which case the number of ports available to MySQL 
 is lower. 

 For more information, see 
 http://support.microsoft.com/default.aspx?scid=kb;en-us;196271. 
 ***

 Keith 


 In theory, theory and practice are the same;
 In practice they are not. 

 On Sun, 12 Feb 2006, Aaron Axelsen wrote:

   
 To: mysql@lists.mysql.com
 From: Aaron Axelsen [EMAIL PROTECTED]
 Subject: Weird MySQL Connection Issues

 I have been experiencing some weird MySQL connection issues lately. 
 Twice now in that last couple weeks, there have been times where some
 mysql applications are working, and others are not working.  Both times
 the mysql connection limit was rather high.  A simple mysql restart has
 fixed the problem both times.

 There are a few forums using phpbb running on this server which get
 heavy access, which is most likely the cause of the problem.

 The version of mysql running is: 4.1.14

 Is there a known issue like this with a lot of mysql connections?

 Does anyone have any related ideas or suggestions?  Thanks!

 -- 
 Aaron Axelsen
 [EMAIL PROTECTED]
 

   

-- 
Aaron Axelsen
[EMAIL PROTECTED]






Weird MySQL Connection Issues

2006-02-12 Thread Aaron Axelsen
I have been experiencing some weird MySQL connection issues lately. 
Twice now in that last couple weeks, there have been times where some
mysql applications are working, and others are not working.  Both times
the mysql connection limit was rather high.  A simple mysql restart has
fixed the problem both times.

There are a few forums using phpbb running on this server which get
heavy access, which is most likely the cause of the problem.

The version of mysql running is: 4.1.14

Is there a known issue like this with a lot of mysql connections?

Does anyone have any related ideas or suggestions?  Thanks!

-- 
Aaron Axelsen
[EMAIL PROTECTED]






mysqldump

2005-11-15 Thread Aaron Morris

I have a very simple database I am trying to backup.

I run myslqdump and it gives me the code below.

Which does nothing but cause errors when I try to run it.

Thank you in advance for your help
-Aaron




-- MySQL Administrator dump 1.4
--
-- ------------------------------------------------------
-- Server version   4.1.11-nt


/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;

/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS,
FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;


--
-- Create schema `inventory`
--

CREATE DATABASE /*!32312 IF NOT EXISTS*/ `inventory`;
USE `inventory`;

--
-- Table structure for table `inventory`.`bug_item`
--

DROP TABLE IF EXISTS `bug_item`;
CREATE TABLE `bug_item` (
  `id` int(4) NOT NULL default '0',
  `title` text NOT NULL,
  `description` text,
  `pageLink` text,
  `status_id` int(4) NOT NULL default '0',
  `user_id` int(4) NOT NULL default '0',
  PRIMARY KEY  (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Dumping data for table `inventory`.`bug_item`
--

/*!40000 ALTER TABLE `bug_item` DISABLE KEYS */;
/*!40000 ALTER TABLE `bug_item` ENABLE KEYS */;


--
-- Table structure for table `inventory`.`bug_item_log`
--

DROP TABLE IF EXISTS `bug_item_log`;
CREATE TABLE `bug_item_log` (
  `log_id` int(4) NOT NULL default '0',
  `bug_id` int(4) NOT NULL default '0',
  `dt_timestamp` datetime NOT NULL default '0000-00-00 00:00:00',
  `title` text NOT NULL,
  `description` text,
  `pageLink` text,
  `status_id` int(4) NOT NULL default '0',
  `user_id` int(4) NOT NULL default '0',
  PRIMARY KEY  (`log_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Dumping data for table `inventory`.`bug_item_log`
--

/*!40000 ALTER TABLE `bug_item_log` DISABLE KEYS */;
/*!40000 ALTER TABLE `bug_item_log` ENABLE KEYS */;


--
-- Table structure for table `inventory`.`bug_status`
--

DROP TABLE IF EXISTS `bug_status`;
CREATE TABLE `bug_status` (
  `status_id` int(4) NOT NULL default '0',
  `title` text NOT NULL,
  PRIMARY KEY  (`status_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Dumping data for table `inventory`.`bug_status`
--

/*!40000 ALTER TABLE `bug_status` DISABLE KEYS */;
/*!40000 ALTER TABLE `bug_status` ENABLE KEYS */;


--
-- Table structure for table `inventory`.`bug_user`
--

DROP TABLE IF EXISTS `bug_user`;
CREATE TABLE `bug_user` (
  `user_id` int(4) NOT NULL default '0',
  `email` text NOT NULL,
  `password` varchar(50) NOT NULL default '',
  `user_name` varchar(200) default NULL,
  `phone` varchar(50) default NULL,
  PRIMARY KEY  (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Dumping data for table `inventory`.`bug_user`
--

/*!40000 ALTER TABLE `bug_user` DISABLE KEYS */;
/*!40000 ALTER TABLE `bug_user` ENABLE KEYS */;


--
-- Table structure for table `inventory`.`inv_item`
--

DROP TABLE IF EXISTS `inv_item`;
CREATE TABLE `inv_item` (
  `inv_id` int(10) unsigned NOT NULL auto_increment,
  `title` varchar(150) default NULL,
  `description` text,
  `inv_list` varchar(150) default NULL,
  `inv_stock_num` varchar(150) default NULL,
  `inv_serial` varchar(150) default NULL,
  `inv_year` varchar(150) default NULL,
  `inv_make` varchar(150) default NULL,
  `inv_model` varchar(150) default NULL,
  `inv_color` varchar(150) default NULL,
  PRIMARY KEY  (`inv_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

--
-- Dumping data for table `inventory`.`inv_item`
--

/*!40000 ALTER TABLE `inv_item` DISABLE KEYS */;
INSERT INTO `inv_item`
(`inv_id`,`title`,`description`,`inv_list`,`inv_stock_num`,`inv_serial`,`inv_year`,`inv_make`,`inv_model`,`inv_color`) VALUES
 (2,'2000 Ford Truck','<P><STRONG>THIS TRUCK IS THE CHIT</STRONG></P>\r\n<H2><FONT face=\"Arial, Helvetica, sans-serif\" size=4>Seriously I love this thing!</FONT></H2>','$2','124','6587943156854','2005','4dr Truck','EXT','Black'),
 (3,'2005 Jaguar S-Type','<H3 class=subheader>What\'s New for the 2005 Jaguar S-Type?</H3>\r\n<DIV class=photo><IMG height=100 alt=\"2005 Jaguar S-Type\" src=\"http://us.autos1.yimg.com/img.autos.yahoo.com/i/buyersguide/small/s-type11.jpg\" width=200 /></DIV>\r\n<DIV class=content>Jaguar\'s 2005 S-Type receives a muscular-looking redesign this year, as well as an updated interior that features Bronze Madrona wood accents, a new instrument panel, clearer displays, and available aluminum trim to replace the traditional wood appliques. A new VDP Edition is available on the S-Type 4.2 V8, which includes leather upholstery with contrasting piping, deep-pile foot well rugs, heated seats, burl walnut trim, walnut and leather trim for the steering wheel and shift knob, auto-leveling xenon headlamps, unique 17-inch multi-spoke wheels and an electronic rear sunshade. The high-performance 2005 Jaguar S-Type R

Re: mysqldump

2005-11-15 Thread Aaron Morris
Right, that is what I am doing, but it does not work.

Have you used mysqldump successfully? 




- Original Message - 
From: gerald_clark [EMAIL PROTECTED]
To: Aaron Morris [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, November 15, 2005 1:40 PM
Subject: Re: mysqldump


 Aaron Morris wrote:
 
 I have a very simple database I am trying to backup.
 
 I run myslqdump and it gives me the code below.
   
 
 That is not code.  It is a file of SQL statements.
 
 Which does nothing but cause errors when I try to run it.
   
 
 You can't run it.
 It is input for the 'mysql' client program:
 mysql < thedumpfile
 
 Thank you in advance for your help
 -Aaron
 
 
 
 
 -- MySQL Administrator dump 1.4
 --
 -- --
 -- Server version 4.1.11-nt
   
 
 
 
 




Locking Issue

2005-08-11 Thread Aaron

Hi all ,

I have been experiencing intermittent locking issues with MySQL. It 
appears that sometimes a query will not release its lock, causing 
other queries to wait and wait until the connection limit is 
reached and I am locked out of the database. Has anyone ever had 
anything like this happen?


The setup:
Redhat 9.0 , Kernel 2.4.20-8smp
mysql-standard-4.1.7-pc-linux-i686-icc-glibc23
MyISAM Tables (and unless InnoDB can support fulltext or some other 
equivalent, migrating isn't an option at present)

ext2fs

Our Database Activity:
We have a somewhat active website.
Things run fairly smoothly for the most part , although we do have some 
slow queries from time to time.
We have far more selects than updates , but updates are still reasonably 
active.

Frequently , an update will get locked while a slower query is running.
Sometimes we can experience a large backup waiting for a slow query , 
but typically everything sorts out once the slow query finishes.
Rarely , however , a query will be in a locked state and will not let 
go of its lock. Subsequent updates lock , and subsequent selects lock.

Eventually , if the above has happened , the connection table will fill up.

We don't have any scripts that explicitly LOCK TABLES, aside from our 
backup script, which uses mysqlhotcopy.
Is it possible that the mysqlhotcopy LOCK TABLES could interfere with 
the locking from the website activity?


I apologise for the vagueness of this request; I really don't know what 
direction would be best to further diagnose this.

If you have any advice , it would be greatly appreciated.

thanks for your time!
Aaron





Re: mysqldump question

2005-05-13 Thread Aaron Wohl
http://www.birdsoft.demon.co.uk/proglib/slowpipe.htm would seem to do
what you want. I haven't tried it yet, but noted the URL for the next
time I needed that functionality.


- Original message -
From: Amit M Bhosle [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Date: Fri, 13 May 2005 09:18:00 +0530
Subject: mysqldump question

Hi:

 I was wondering if there's any way to limit the bandwidth used by
mysqldump to dump data from remote hosts. Since I couldn't find any
documentation on this, I assume that mysqldump will use all the
available bandwidth of the network.

 The issue is that I'm looking to fetch data to the tune of hundreds of
MBs, and I don't want mysqldump to hog all the bandwidth, thus
adversely affecting other communication.

Thanks in advance for your time.
AB

-- 
A great idea need not be complicated.
http://www.cs.ucsb.edu/~bhosle




Question about indexing a highly searched table

2005-01-03 Thread Aaron
Hi all , 
 
I have a question about how to best approach indexing a table with many
variations of fields searched by, but one common ORDER BY field. 
 
Take for example this table (with only relevant searched fields shown):
CREATE TABLE Offers ( 
  `bid` mediumtext NOT NULL,
  `company_name` varchar(50) default NULL,
  `country` varchar(25) NOT NULL default '',
  `email` varchar(100) NOT NULL default '',
  `keywords` varchar(100) default NULL,
  `deletedate` date NOT NULL default '0000-00-00',
  `subcatID` int(10) unsigned NOT NULL default '0',
  `ID` int(10) unsigned NOT NULL auto_increment,
  `userID` int(10) unsigned NOT NULL default '0',
  FULLTEXT KEY `keywords` (`keywords`),
  FULLTEXT KEY `bid` (`bid`)
)
 
So , the question about indexes comes up. 
There are many variations of searches that will happen such as:
bid contains certain words
keywords contain certain words
subcatID IN (1,23,3,4,5,6,7,8,9,) etc.. 
subcatID IN (1,23,3,4,5,6,7,8,9,) and country='XX'
subcatID IN (1,23,3,4,5,6,7,8,9,) and country='XX' AND userID =
'12345678'
 
and so on, all mixed together at points. 
 
subcatID is *usually* in the search criteria. 
everything is ordered by deletedate to get the most recent results
first. 
 
What should the thinking be when deciding how to best index this table
for speed of searches?
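One way to reason about it: a composite index is a B-tree sorted on the full key tuple, so an index on (subcatID, deletedate) keeps each subcatID's entries adjacent and already sorted by deletedate — the common ORDER BY then falls out of the index scan with no extra sort. A toy model of that in Python, with made-up rows:

```python
from bisect import bisect_left

# Entries sorted by the full key, as a (subcatID, deletedate) B-tree would be.
index = sorted([
    (5, "2004-11-01", 101),   # (subcatID, deletedate, row_id) -- made-up data
    (7, "2004-12-30", 105),
    (7, "2004-10-15", 103),
    (9, "2004-12-05", 106),
    (7, "2004-11-20", 104),
    (5, "2004-12-01", 102),
])

# Range scan for subcatID = 7: binary-search the prefix boundaries.
lo = bisect_left(index, (7,))
hi = bisect_left(index, (8,))
matches = index[lo:hi]

# All subcatID = 7 rows, and already in deletedate order -- no extra sort.
assert [row[2] for row in matches] == [103, 104, 105]
```

This suggests trying a composite key led by subcatID with deletedate second for the non-fulltext searches; whether the optimizer actually uses it still depends on selectivity, and the fulltext searches on bid/keywords cannot use it.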
 
All advice welcome and appreciated!
 
Thanks,
Aaron
 
 
 
 
 
 


Why is this simple query so slow?

2004-12-10 Thread Aaron
Hi all , 
 
I have a relatively simple query that is taking longer than I think it
should. Can anyone possibly give me some idea why this might be or any
potential bottleneck areas I might want to check out?
 
thanks!
 
Here is some information. 
 
The query below takes around 8 seconds, and returns 3253 rows.
 
Mysql Version: 4.1.7-standard-log
Operating System: Linux 2.4.20-8smp
Query: SELECT ID FROM Offers_To_Buy FORCE INDEX(scdd) WHERE subcatID = 72 ;
...
...
3253 rows in set (8.00 sec)
 
Explain says: 
mysql> EXPLAIN SELECT ID FROM Offers_To_Buy FORCE INDEX(scdd) WHERE subcatID = 72 ;
+----+-------------+---------------+------+---------------+------+---------+-------+------+-------------+
| id | select_type | table         | type | possible_keys | key  | key_len | ref   | rows | Extra       |
+----+-------------+---------------+------+---------------+------+---------+-------+------+-------------+
|  1 | SIMPLE      | Offers_To_Buy | ref  | scdd          | scdd | 4       | const | 2988 | Using where |
+----+-------------+---------------+------+---------------+------+---------+-------+------+-------------+
1 row in set (0.02 sec)
 
mysql> SHOW INDEXES FROM Offers_To_Buy ;
+---------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table         | Non_unique | Key_name    | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+---------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Offers_To_Buy |          1 | ID          |            1 | ID          | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | oldtitle    |            1 | oldtitle    | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | user        |            1 | userID      | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | date        |            1 | deletedate  | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | subcategory |            1 | subcatID    | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | country     |            1 | country     | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | source      |            1 | source      | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | scdd        |            1 | subcatID    | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | scdd        |            2 | deletedate  | A         |        NULL | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy |          1 | keywords    |            1 | keywords    | NULL      |        NULL | NULL     | NULL   | YES  | FULLTEXT   |         |
| Offers_To_Buy |          1 | bid         |            1 | bid         | NULL      |        NULL | NULL     | NULL   |      | FULLTEXT   |         |
+---------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
11 rows in set (0.00 sec)
 
 
 
 
 
 
 
 


Re: Problem using debug switch with mysqlimport

2004-12-08 Thread Settles, Aaron
Sure enough... I just installed the latest 4.1 Linux binaries; I didn't
realize that the server itself had to be compiled with debugging enabled
(although now that I realize it, it makes complete sense).  It would be
nice if the documentation for mysqlimport at least made note of this.
Thanks for the response.


Upgrading 3.23 to 4.1

2004-12-06 Thread Aaron E. Diehl
Hello All,

I'm having trouble upgrading 3.23 to 4.1.  Since I don't want to break
production, I'm trying to start a test instance on the machine.  The problem
I'm having seems to be a missing .frm file.  The following shows the
invocation and the log file.  Any help would be greatly appreciated.

Thanks in advance,

Aaron

/opt/csw/mysql4/bin/mysqld_safe --basedir=/opt/csw/mysql4
--datadir=/opt/csw/mysql4/data --port=3307 

Log File:
041206 11:19:53  mysqld started
041206 11:19:53  InnoDB: Started
041206 11:19:53  Fatal error: Can't open privilege tables: Can't find file:
'./mysql/host.frm' (errno: 13)
041206 11:19:54  Aborting

041206 11:19:54  InnoDB: Starting shutdown...
041206 11:19:56  InnoDB: Shutdown completed
041206 11:19:56  /opt/csw/mysql4/libexec/mysqld: Shutdown Complete

041206 11:19:56  mysqld ended





Newbie, MySQL test fails, spent hours, please help

2004-12-06 Thread Aaron Ford
Hey.

The code that I'm trying to get to work is as follows.  The problem is
with the 7th, 8th, and 9th lines...

<html>
<head>
<title>Test MySQL</title>
</head>
<body>
<!-- mysql_up.php -->
<?php
$host = "";
$user = "";
$password = "";

mysql_connect($host, $user, $password);
$sql = "show status";
$result = mysql_query($sql);
if ($result == 0)
   echo("<b>Error " . mysql_errno() . ": " . mysql_error() . "</b>");
elseif (mysql_num_rows($result) == 0)
   echo("<b>Query executed successfully!</b>");
else
{
?>
<!-- Table that displays the results -->
<table border=1>
  <tr><td><b>Variable_name</b></td><td><b>Value</b></td></tr>
  <?php
for ($i = 0; $i < mysql_num_rows($result); $i++) {
  echo("<TR>");
  $row_array = mysql_fetch_row($result);
  for ($j = 0; $j < mysql_num_fields($result); $j++) {
    echo("<TD>" . $row_array[$j] . "</td>");
  }
  echo("</tr>");
}
  ?>
</table>
<?php } ?>
</body>
</html>

The server that I'm uploading to is www.allbutnothing.com.  I know my
computer name; is the correct form for the host value
computername.webserver.com??  How do I find my username and password?
The only one I entered was the one when MySQLadmin.exe installed... is
that the username and password that should be used?

Thanks very much.






Problem using debug switch with mysqlimport

2004-12-06 Thread Settles, Aaron
I'm trying to utilize the debug switch with mysqlimport so that I can figure
out why I'm getting errors on the data I'm importing, but I have yet to
figure out a way to do this.  I've tried to read the sparse documentation
concerning this feature and no debug file is ever produced.  I've tried
using it as indicated by the mysqlimport --help documentation such as:

 /usr/local/mysql/bin/mysqlimport --debug=d:t:o,mysql.dbg -uuser
-ppassword --local database import_file

As well as many variations of this.  I've been unable to locate through
Google or anything else anyone who has successfully utilized this feature.
Does anyone out there have any input?

Thanks.

Aaron


serious bug in 4.1: doubles failing equality tests

2004-11-24 Thread Aaron J. Mackey
This is MySQL 4.1.5-gamma-max on Darwin Kernel Version 7.6.0 (OS X 
v10.3.6), installed via the Mac OS X installer.

mysql> select gid, fbin from fdata where gid = 3557 order by fbin;
+--+--+
| gid  | fbin |
+--+--+
| 3557 |  1000.007086 |
| 3557 | 1.000708 |
| 3557 | 1.000708 |
| 3557 | 1.000708 |
+--+--+
4 rows in set (0.00 sec)
mysql> select gid, fbin from fdata where fbin between 1.000708 and 
1.000709;
+--+--+
| gid  | fbin |
+--+--+
| 3442 | 1.000709 |
| 3442 | 1.000709 |
| 3558 | 1.000709 |
| 3558 | 1.000709 |
+--+--+
4 rows in set (0.13 sec)

Note that none of these rows have an fbin of 1.000708, which is the 
fbin of the last three rows from the first result; furthermore, if I 
try to extract those three rows by their fbin value directly:

mysql> select gid, fbin from fdata where fbin = 1.000708;
Empty set (0.00 sec)

Yet:
mysql> select 1.000708 BETWEEN 1.000708 AND 1.000709;
++
| 1.000708 BETWEEN 1.000708 AND 1.000709 |
++
|  1 |
++
1 row in set (0.00 sec)

The definition of the fdata table is:
CREATE TABLE fdata (
  fid int(11) NOT NULL auto_increment,
  fref varchar(100) NOT NULL default '',
  fstart int(10) unsigned NOT NULL default '0',
  fstop int(10) unsigned NOT NULL default '0',
  fbin double(20,6) NOT NULL default '0.00',
  ftypeid int(11) NOT NULL default '0',
  fscore float default NULL,
  fstrand enum('+','-') default NULL,
  fphase enum('0','1','2') default NULL,
  gid int(11) NOT NULL default '0',
  ftarget_start int(10) unsigned default NULL,
  ftarget_stop int(10) unsigned default NULL,
  PRIMARY KEY  (fid),
  UNIQUE KEY fref (fref,fbin,fstart,fstop,ftypeid,gid),
  KEY ftypeid (ftypeid),
  KEY gid (gid)
) TYPE=MyISAM;
Increasing the precision of fbin from 6 to 8 did not fix the problem; 
upgrading to the latest 4.1.7 did not fix the problem; downgrading to 
4.0.22 did fix the problem.

I'd very much like to upgrade back to 4.1 series, as the improvement to 
filesort seems to greatly affect my application ... I'm quite willing 
to apply a software patch locally to fix the problem.
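Independent of whatever regressed between 4.0 and 4.1, exact equality on binary doubles is fragile in general, because most decimal fractions have no exact base-2 representation. A quick illustration in Python (nothing MySQL-specific):

```python
import math
from decimal import Decimal

# Most decimal fractions are stored as the nearest binary double,
# so arithmetic can produce values that fail exact equality:
assert 0.1 + 0.2 != 0.3
print(repr(0.1 + 0.2))          # 0.30000000000000004

# Comparing with a tolerance instead of equality is robust:
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)

# Exact base-10 types (like SQL DECIMAL) avoid the problem entirely:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```

In SQL the analogous defenses are a small BETWEEN window around the target value, or a DECIMAL column where exact equality matters — though the behavior reported here, where 4.0 and 4.1 disagree on the same data, points at a server-side conversion bug rather than this generic pitfall.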

Thanks,
-Aaron
--
Aaron J. Mackey, Ph.D.
Dept. of Biology, Goddard 212
University of Pennsylvania   email:  [EMAIL PROTECTED]
415 S. University Avenue office: 215-898-1205
Philadelphia, PA  19104-6017 fax:215-746-6697


Re: Tables Crash when I Delete A Row

2004-11-08 Thread Aaron
Hi Gleb , thanks for the answer. That's exactly what the problem was. I 
needed to rebuild the fulltext indexes on my tables. After I did that , the 
problem seems to have magically disappeared. :)

Cheers,
Aaron
- Original Message - 
From: Gleb Paharenko [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, November 07, 2004 5:23 AM
Subject: Re: Tables Crash when I Delete A Row


Hi.
From which version did you upgrade? If you upgraded from 4.0 you should
carefully read:
 http://dev.mysql.com/doc/mysql/en/Upgrading-from-4.0.html
Maybe REPAIR with USE_FRM will be helpful. Sometimes after an upgrade, tables 
have to be rebuilt or repaired.

Aaron [EMAIL PROTECTED] wrote:
[-- text/plain, encoding quoted-printable, charset: iso-8859-1, 122 
lines --]

Hi all , I am having some issues with 4.1.7 on Redhat 9 Kernel 
2.4.20-8SMP

The other day I upgraded to RH9 and then put on: 
mysql-standard-4.1.7-pc-linux-i686-icc-glibc23

Since then , it seems that one of (not all!) our tables has taken to 
crashing when you delete records from it. It spits out the following 
error:

Incorrect key file for table 'Offers_To_Sell'; try to repair it
Ok , well If I do a CHECK TABLE , I get this:
mysql CHECK TABLE Offers_To_Sell ;
+-+---+--+-+
| Table   | Op| Msg_type | Msg_text 
|
+-+---+--+-+
| Offers_To_Sell | check | warning  | Table is marked as crashed  |
| Offers_To_Sell | check | error| Found 265847 keys of 265850 |
| Offers_To_Sell | check | error| Corrupt |
+-+---+--+-+
3 rows in set (2.80 sec)

Now , if I try to REPAIR , I get this:
mysql REPAIR TABLE Offers_To_Sell ;
+-++--+-+
| Table   | Op | Msg_type | Msg_text 
|
+-++--+-+
| worldbid.Offers_To_Sell | repair | error| 2 when fixing table 
|
| worldbid.Offers_To_Sell | repair | error| Can't copy 
datafile-header to tempfile, error 9 |
| worldbid.Offers_To_Sell | repair | status   | Operation failed 
|
+-++--+-+
3 rows in set (2 min 5.49 sec)

If I shell out and use myisamchk --quick , I get this:
myisamchk --quick --tmpdir=/var/tmp Offers_To_Sell
Checking MyISAM file: Offers_To_Sell
Data records:   0   Deleted blocks:   0
myisamchk: warning: Table is marked as crashed and last repair failed
- check file-size
myisamchk: warning: Size of indexfile is: 207900672  Should be: 
19564544
myisamchk: warning: Size of datafile is: 200488316   Should be: 
200488292
- check record delete-chain
- check key delete-chain
- check index reference
- check data record references index: 1
myisamchk: error: Found 265850 keys of 0
- check record links
myisamchk: error: Record-count is not ok; is 265850   Should be: 0
myisamchk: warning: Found 265850 partsShould be: 1 
parts
MyISAM-table 'Offers_To_Sell' is corrupted
Fix it using switch -r or -o

Then when I do a -r I get this:
myisamchk -r --verbose Offers_To_Sell.MYI
- recovering (with sort) MyISAM-table 'Offers_To_Sell.MYI'
Data records: 47344
- Fixing index 1
 - Searching for keys, allocating buffer for 174743 keys
 - Dumping 47344 keys
- Fixing index 2
 - Searching for keys, allocating buffer for 47345 keys
 - Dumping 47344 keys
- Fixing index 3
 - Searching for keys, allocating buffer for 19239 keys
 - Last merge and dumping keys
- Fixing index 4
 - Searching for keys, allocating buffer for 47345 keys
 - Dumping 47344 keys
- Fixing index 5
 - Searching for keys, allocating buffer for 47345 keys
 - Dumping 47344 keys
- Fixing index 6
 - Searching for keys, allocating buffer for 47345 keys
 - Dumping 47344 keys
- Fixing index 7
 - Searching for keys, allocating buffer for 47345 keys
 - Dumping 47344 keys
- Fixing index 8
 - Searching for keys, allocating buffer for 47345 keys
 - Dumping 47344 keys
- Fixing index 9
 - Searching for keys, allocating buffer for 47345 keys
 - Dumping 47344 keys
- Fixing index 10
 - Searching for keys, allocating buffer for 45574 keys
myisamchk: error: 22 when fixing table
MyISAM-table 'Offers_To_Sell.MYI' is not fixed because of errors
Try fixing it by using the --safe-recover (-o), the --force (-f) option 
or by not using the --quick (-q) flag

Then, I can fix the problem by using the --safe-recover option , but as 
soon as a delete is done on the table , it corrupts again.

Anyone have any ideas?
Thanks,
Aaron






Error 1034 - FULLTEXT Related?

2004-11-07 Thread Aaron
Hello , I recently posted a message regarding table corruption upon deletes. 

I have done more poking around and noticed that myisamchk dies upon trying to 
check the FULLTEXT index mentioned in my last message. I then see that the 1034 
error pops up after a myisamchk -o fixes the table and I try to delete a record 
again. 

I have also noticed that the only tables that are corrupting are ones that have 
fulltext indexes. 
Here is some more info , if anyone can help , it would be greatly appreciated. 
Should I consider rebuilding the full text indexes?

Thanks,
Aaron

mysql> DELETE FROM Offers_To_Buy WHERE ID=1601598 ;
ERROR 1034 (HY000): Incorrect key file for table 'Offers_To_Buy_Mirror'; try to 
repair it

System Info:
Red Hat 9.0 Kernel 2.4.20-SMP 
mysql-standard-4.1.7-pc-linux-i686-icc-glibc23
1 Gig Ram 
Legacy SCSI Raid Array (Raid 5) 

Indexes on said table:
mysql> SHOW INDEXES FROM Offers_To_Buy_Mirror ;
+----------------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table                | Non_unique | Key_name    | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+----------------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Offers_To_Buy_Mirror |          1 | ID          |            1 | ID          | A         |       53710 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | oldtitle    |            1 | oldtitle    | A         |       53710 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | user        |            1 | userID      | A         |         655 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | date        |            1 | deletedate  | A         |         202 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | subcategory |            1 | subcatID    | A         |        1096 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | country     |            1 | country     | A         |         166 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | source      |            1 | source      | A         |          30 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | scdd        |            1 | subcatID    | A         |        1096 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | scdd        |            2 | deletedate  | A         |       26855 | NULL     | NULL   |      | BTREE      |         |
| Offers_To_Buy_Mirror |          1 | keywords    |            1 | keywords    | NULL      |       26855 | NULL     | NULL   | YES  | FULLTEXT   |         |
| Offers_To_Buy_Mirror |          1 | bid         |            1 | bid         | NULL      |       26855 | NULL     | NULL   |      | FULLTEXT   |         |
+----------------------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
11 rows in set (0.01 sec)


Tables Crash when I Delete A Row

2004-11-06 Thread Aaron
Hi all , I am having some issues with 4.1.7 on Redhat 9 Kernel 2.4.20-8SMP 

The other day I upgraded to RH9 and then put on: 
mysql-standard-4.1.7-pc-linux-i686-icc-glibc23

Since then , it seems that one of (not all!) our tables has taken to crashing 
when you delete records from it. It spits out the following error:

Incorrect key file for table 'Offers_To_Sell'; try to repair it

Ok , well If I do a CHECK TABLE , I get this:

mysql> CHECK TABLE Offers_To_Sell ;
+----------------+-------+----------+-----------------------------+
| Table          | Op    | Msg_type | Msg_text                    |
+----------------+-------+----------+-----------------------------+
| Offers_To_Sell | check | warning  | Table is marked as crashed  |
| Offers_To_Sell | check | error    | Found 265847 keys of 265850 |
| Offers_To_Sell | check | error    | Corrupt                     |
+----------------+-------+----------+-----------------------------+
3 rows in set (2.80 sec)

Now , if I try to REPAIR , I get this:
mysql> REPAIR TABLE Offers_To_Sell ;
+-------------------------+--------+----------+-------------------------------------------------+
| Table                   | Op     | Msg_type | Msg_text                                        |
+-------------------------+--------+----------+-------------------------------------------------+
| worldbid.Offers_To_Sell | repair | error    | 2 when fixing table                             |
| worldbid.Offers_To_Sell | repair | error    | Can't copy datafile-header to tempfile, error 9 |
| worldbid.Offers_To_Sell | repair | status   | Operation failed                                |
+-------------------------+--------+----------+-------------------------------------------------+
3 rows in set (2 min 5.49 sec)

If I shell out and use myisamchk --quick , I get this:
myisamchk --quick --tmpdir=/var/tmp Offers_To_Sell
Checking MyISAM file: Offers_To_Sell
Data records:   0   Deleted blocks:   0
myisamchk: warning: Table is marked as crashed and last repair failed
- check file-size
myisamchk: warning: Size of indexfile is: 207900672  Should be: 19564544
myisamchk: warning: Size of datafile is: 200488316   Should be: 200488292
- check record delete-chain
- check key delete-chain
- check index reference
- check data record references index: 1
myisamchk: error: Found 265850 keys of 0
- check record links
myisamchk: error: Record-count is not ok; is 265850   Should be: 0
myisamchk: warning: Found 265850 partsShould be: 1 parts
MyISAM-table 'Offers_To_Sell' is corrupted
Fix it using switch -r or -o

Then when I do a -r I get this:

myisamchk -r --verbose Offers_To_Sell.MYI
- recovering (with sort) MyISAM-table 'Offers_To_Sell.MYI'
Data records: 47344
- Fixing index 1
  - Searching for keys, allocating buffer for 174743 keys
  - Dumping 47344 keys
- Fixing index 2
  - Searching for keys, allocating buffer for 47345 keys
  - Dumping 47344 keys
- Fixing index 3
  - Searching for keys, allocating buffer for 19239 keys
  - Last merge and dumping keys
- Fixing index 4
  - Searching for keys, allocating buffer for 47345 keys
  - Dumping 47344 keys
- Fixing index 5
  - Searching for keys, allocating buffer for 47345 keys
  - Dumping 47344 keys
- Fixing index 6
  - Searching for keys, allocating buffer for 47345 keys
  - Dumping 47344 keys
- Fixing index 7
  - Searching for keys, allocating buffer for 47345 keys
  - Dumping 47344 keys
- Fixing index 8
  - Searching for keys, allocating buffer for 47345 keys
  - Dumping 47344 keys
- Fixing index 9
  - Searching for keys, allocating buffer for 47345 keys
  - Dumping 47344 keys
- Fixing index 10
  - Searching for keys, allocating buffer for 45574 keys
myisamchk: error: 22 when fixing table
MyISAM-table 'Offers_To_Sell.MYI' is not fixed because of errors
Try fixing it by using the --safe-recover (-o), the --force (-f) option or by 
not using the --quick (-q) flag

Then, I can fix the problem by using the --safe-recover option , but as soon as 
a delete is done on the table , it corrupts again. 

Anyone have any ideas?

Thanks,
Aaron









4.1 Crashing upon running mysqld_Safe

2004-10-28 Thread Aaron
Hi all , 

I am trying to install the latest 4.1.7-standard Linux release on our server and it 
gets caught in an endless loop of crashes.

I have read the error logs and followed the stack trace advice from the docs, but 
to be honest I'm not sure how to interpret what it reports. 

Here is what resolve_stack_dump says:

0x808af93 func_name__C12Item_sum_std + 3
0x82d6de8 _end + 1123528
0x82c0b95 _end + 1032821
0x807d384 (?)
0x80b9aa0 check_for_max_user_connections__FPCciT0 + 244
0x80e1e2d replace__t13List_iterator1Zt4List1Z4ItemRt4List1Zt4List1Z4Item + 57
0x808c550 val__15Item_func_round + 228
0x82deed4 _end + 1156532
0x8048101 (?)

Does anyone know what this means exactly? 

If it helps, we're running Red Hat 6.2, Linux 2.2.19-6.2.12smp. I got it running on 
a similarly configured machine with no problems, so I'm kinda stumped. 

Thanks!
Aaron



Slow ORDER BY query..

2004-10-25 Thread Aaron
Hi all , 

I am currently experiencing an issue with a query I would have thought to be somewhat 
straightforward. Perhaps someone could shed some light on what might be causing this? 
The below example was running by itself, not waiting for any other queries. It just 
took a bloody long time to run. The system load went to around 7 or so, however the 
CPUs were not taxed at all. 

Of curious note to me is that it seems to be intermittently taking a long time. Upon 
restarting the server and flushing the cache, some queries will take < 1 second, 
some will take around 5, and some will take ridiculously long times. I think that 
has to do perhaps with the amount of rows matching before the ORDER BY? 

I've included all the information I can think of below if anyone feels like having a 
look; it would be greatly appreciated. 

Thanks!
Aaron

~~

MySQL Version:
~~
MySQL 4.0.18 on RedHat Linux

The Query 

mysql> SELECT ID FROM Offers_To_Buy WHERE subcatID = 336 ORDER BY deletedate DESC LIMIT 20 ; 
+-+
| ID  |
+-+
| 1653497 |
| 1653498 |
| 1653506 |
| 1652861 |
| 1652685 |
| 1652784 |
| 1651739 |
| 1650276 |
| 1650323 |
| 1649569 |
| 1649079 |
| 1649228 |
| 1649410 |
| 1649411 |
| 1648444 |
| 1648543 |
| 1648877 |
| 1648897 |
| 1648911 |
| 1648308 |
+-+
20 rows in set (2 min 52.20 sec)

Record Count:
mysql> SELECT count(1) FROM Offers_To_Buy ;
+--+
| count(1) |
+--+
|   461216 |
+--+
1 row in set (0.00 sec)

Explain Output:
mysql> EXPLAIN SELECT ID FROM Offers_To_Buy WHERE subcatID = 336 ORDER BY deletedate DESC LIMIT 20 ;
+---------------+------+------------------+-------------+---------+-------+------+-----------------------------+
| table         | type | possible_keys    | key         | key_len | ref   | rows | Extra                       |
+---------------+------+------------------+-------------+---------+-------+------+-----------------------------+
| Offers_To_Buy | ref  | subcategory,scdd | subcategory | 4       | const | 8562 | Using where; Using filesort |
+---------------+------+------------------+-------------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)

The Table:

mysql> describe Offers_To_Buy ;
+------------------+------------------+------+-----+------------+----------------+
| Field            | Type             | Null | Key | Default    | Extra          |
+------------------+------------------+------+-----+------------+----------------+
| bid              | mediumtext       |      | MUL |            |                |
| company_name     | varchar(50)      | YES  |     | NULL       |                |
| street_address   | varchar(100)     | YES  |     | NULL       |                |
| city             | varchar(25)      | YES  |     | NULL       |                |
| province         | varchar(25)      | YES  |     | NULL       |                |
| country          | varchar(25)      |      | MUL |            |                |
| postal_code      | varchar(10)      | YES  |     | NULL       |                |
| phone            | varchar(50)      | YES  |     | NULL       |                |
| fax              | varchar(50)      | YES  |     | NULL       |                |
| email            | varchar(100)     |      |     |            |                |
| contact_name     | varchar(50)      | YES  |     | NULL       |                |
| keywords         | varchar(100)     | YES  | MUL | NULL       |                |
| URL              | varchar(200)     | YES  |     | NULL       |                |
| obtain_documents | varchar(50)      | YES  |     | NULL       |                |
| cost             | float(10,2)      | YES  |     | NULL       |                |
| deletedate       | date             |      | MUL | 0000-00-00 |                |
| bidvalue         | float(10,2)      | YES  |     | NULL       |                |
| country_dest     | varchar(25)      | YES  |     | NULL       |                |
| subcatID         | int(10) unsigned |      | MUL | 0          |                |
| ID               | int(10) unsigned |      | MUL | NULL       | auto_increment |
| source           | varchar(30)      |      | MUL |            |                |
| approved_by      | varchar(30)      | YES  |     | NULL       |                |
| oldtitle         | varchar(100)     |      | MUL |            |                |
| Closed           | tinyint(4)       |      |     | 0          |                |
| userID           | int(10) unsigned |      | MUL | 0          |                |
| image            | varchar(30)      | YES  |     | NULL       |                |
| postDate         | date             | YES  |     | NULL       |                |
| blank5           | char(1)          | YES  |     | NULL

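A composite index is the usual fix for the `Using filesort` in the EXPLAIN output above: with an index on (subcatID, deletedate), the rows matching the WHERE clause come out of the index already in ORDER BY order, so no sort pass is needed (the `scdd` key may already be exactly this, in which case coaxing the optimizer toward it would be the thing to try). A minimal sqlite3 sketch of the principle — names are illustrative, and sqlite reports a sort in EXPLAIN QUERY PLAN as `USE TEMP B-TREE FOR ORDER BY`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE offers (id INTEGER PRIMARY KEY, subcat_id INTEGER, deletedate TEXT)")
conn.executemany("INSERT INTO offers (subcat_id, deletedate) VALUES (?, ?)",
                 [(336, f"2004-03-{d:02d}") for d in range(1, 29)] + [(1, "2004-01-01")] * 10)

def plan(sql):
    # Concatenate the query-plan detail rows sqlite reports for this statement.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT id FROM offers WHERE subcat_id = 336 ORDER BY deletedate DESC LIMIT 20"

# With only a single-column index, the matches still have to be sorted.
conn.execute("CREATE INDEX idx_subcat ON offers (subcat_id)")
before = plan(q)

# A composite index covers both the filter and the sort order.
conn.execute("CREATE INDEX idx_subcat_date ON offers (subcat_id, deletedate)")
after = plan(q)

print(before)
print(after)
```

The same idea applies to the MySQL query: the sort step disappears when the index order matches the ORDER BY.
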
Table Lock Delays and Connection Pooling

2004-10-18 Thread Aaron
Hi all , 

I have a quick question regarding table locking.

This is a snippet referring to when table locking is disadvantageous: "Another client 
issues another SELECT statement on the same table. Because UPDATE has 
higher priority than SELECT, this SELECT will wait for the UPDATE to finish. It will 
also wait for the first SELECT to finish!" 

So what constitutes a new client, exactly? We use Perl and DBI to connect to MySQL. 
Does this mean that every time we connect to the database it is considered a new client? 
If so, would some form of connection pooling/caching help reduce the lock delays caused by a 
slow SELECT statement?

Thanks !
Aaron



Why this query doesn't group the email addresses?

2004-07-13 Thread Aaron Wolski
Hey all,

Got this query:

SELECT first,last,email FROM CustomerTable AS t1,
OrderTable AS t2, CartTable AS t3 WHERE t2.cart_id=t3.cart_id
AND t1.id=t2.customer_index AND t3.submitted='1' AND
t3.product_index='1' AND t3.quantity>0
GROUP BY t1.email ORDER BY t1.first,t1.last

For some strange reason it doesn't seem to group the email addresses.
I'd be hard pressed to find every occurrence out of 1000 records, but I
DID quickly spot two identical records, which means the email address
was not grouped.

What can I do or where did I go wrong?

Thanks!

Aaron



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

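The thread above never pins down the cause, but the usual reason GROUP BY appears not to group is that the values differ invisibly — case, stray whitespace, or a join producing near-duplicates. A sqlite3 sketch with made-up data (note that MySQL's VARCHAR comparison rules around trailing spaces and case differ from sqlite's, which compares bytes exactly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (first TEXT, last TEXT, email TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    ("Aaron", "Wolski", "aaron@example.com"),
    ("Aaron", "Wolski", "aaron@example.com"),   # exact duplicate: collapsed by GROUP BY
    ("Aaron", "Wolski", "aaron@example.com "),  # trailing space: a distinct group here
])

rows = conn.execute(
    "SELECT first, last, email FROM customers GROUP BY email ORDER BY first, last"
).fetchall()
print(len(rows))
```

If the grouped column looks identical but rows still come back twice, comparing `LENGTH(email)` or `HEX(email)` across the suspect rows will usually expose the hidden difference.
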


RE: Why this query doesn't group the email addresses?

2004-07-13 Thread Aaron Wolski
Yes sir. Exactly!

A

 -Original Message-
 From: Victor Pendleton [mailto:[EMAIL PROTECTED]
 Sent: July 13, 2004 11:15 AM
 To: 'Aaron Wolski '; '[EMAIL PROTECTED] '
 Subject: RE: Why this query doesn't group the email addresses?
 
 Were the records exactly the same?
 
 -Original Message-
 From: Aaron Wolski
 To: [EMAIL PROTECTED]
 Sent: 7/13/04 10:04 AM
 Subject: Why this query doesn't group the email addresses?
 
 Hey all,
 
 Got this query:
 
 SELECT first,last,email FROM CustomerTable AS t1,
 OrderTable AS t2, CartTable AS t3 WHERE t2.cart_id=t3.cart_id
 AND t1.id=t2.customer_index AND t3.submitted='1' AND
  t3.product_index='1' AND t3.quantity>0
 GROUP BY t1.email ORDER BY t1.first,t1.last
 
 For some strange reason it doesn't seem to group the email addresses.
 I'd be hard pressed to find every occurrence out of 1000 records, but
I
 DID quickly spot two exact same records which means the email address
 was not grouped.
 
 What can I do or where did I go wrong?
 
 Thanks!
 
 Aaron
 
 
 






all upper case records.. Keeping first char upper and rest lower?

2004-07-13 Thread Aaron Wolski
Hey guys,

I have a column in a table called 'first'.

Currently all records are upper case. Is it possible for me to do a
select statement that makes all chars after the first char lower case?

Example:

Current: AARON
After: Aaron


I think this is possible.. just don't know how to execute the functions
together to make it happen.

Thanks!

Aaron






RE: all upper case records.. Keeping first char upper and rest lower?

2004-07-13 Thread Aaron Wolski
Hi there,

Thanks for the help!

Someone else also gave me a similar query which did exactly as yours
did, but with the same minor problem.

In this table I also have columns for last and email. So the columns
are first, last, email.

The problem with both yours and this other person's query is that it
groups all the columns (first, last, email) into one column.

What I would like is just the 'first' column.

Something tells me this isn't possible?

Thanks to you as well for the help. I wouldn't have figured it out for
myself that's for sure.

Aaron

 -Original Message-
 From: Wesley Furgiuele [mailto:[EMAIL PROTECTED]
 Sent: July 13, 2004 1:10 PM
 To: Aaron Wolski
 Cc: [EMAIL PROTECTED]
 Subject: Re: all upper case records.. Keeping first char upper and
rest
 lower?
 
 
 SELECT CONCAT( UPPER( LEFT( first, 1 ) ), LOWER( RIGHT( first, LENGTH(
 first ) - 1 ) ) ) AS `first` FROM table
 
 
 On Jul 13, 2004, at 12:51 PM, Aaron Wolski wrote:
 
  Hey guys,
 
  I have a column in a table called 'first'.
 
  Currently all records are upper case. Is it possible for me to do a
  select statement that makes all chars after the first char lower
case?
 
  Example:
 
  Current: AARON
  After: Aaron
 
 
  I think this is possible.. just don't know how to execute the
functions
  together to make it happen.
 
  Thanks!
 
  Aaron
 
 
 
  --
 
 
 




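Wesley's CONCAT/UPPER/LOWER approach from the reply above can be sanity-checked outside MySQL; sqlite has no LEFT()/RIGHT(), so substr() stands in, but the shape is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (first TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("AARON",), ("WOLSKI",)])

# upper(first char) || lower(the rest) -- the sqlite spelling of
# CONCAT(UPPER(LEFT(first,1)), LOWER(RIGHT(first, LENGTH(first)-1)))
rows = conn.execute(
    "SELECT upper(substr(first, 1, 1)) || lower(substr(first, 2)) AS first FROM t"
).fetchall()
print([r[0] for r in rows])  # ['Aaron', 'Wolski']
```

In MySQL the aliased expression can simply be listed alongside the other columns (`SELECT ..., last, email FROM ...`) so only `first` is transformed.
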


RE: Why this query doesn't group the email addresses?

2004-07-13 Thread Aaron Wolski
varchar, sir.



 -Original Message-
 From: Wesley Furgiuele [mailto:[EMAIL PROTECTED]
 Sent: July 13, 2004 1:14 PM
 To: Aaron Wolski
 Cc: [EMAIL PROTECTED]
 Subject: Re: Why this query doesn't group the email addresses?
 
 What type of field is the email field?
 
 Wes
 
 On Jul 13, 2004, at 11:04 AM, Aaron Wolski wrote:
 
  Hey all,
 
  Got this query:
 
  SELECT first,last,email FROM CustomerTable AS t1,
  OrderTable AS t2, CartTable AS t3 WHERE t2.cart_id=t3.cart_id
  AND t1.id=t2.customer_index AND t3.submitted='1' AND
   t3.product_index='1' AND t3.quantity>0
  GROUP BY t1.email ORDER BY t1.first,t1.last
 
  For some strange reason it doesn't seem to group the email
addresses.
  I'd be hard pressed to find every occurrence out of 1000 records,
but I
  DID quickly spot two exact same records which means the email
address
  was not grouped.
 
  What can I do or where did I go wrong?
 
  Thanks!
 
  Aaron
 
 
 






anyone help with this query? Returning tooooo many results

2004-07-09 Thread Aaron Wolski
Hi all,

Having a problem with a query that's returning 486,057 results when it
most definitely should NOT be doing that.

I have two tables:

One for a list of customers that purchased product A, another for customers
who purchased product B.

Columns are:

Id
First
Last
Email

I am trying to compare table 1 to table 2 to get a result set that gives
me the contact info (table columns) for those whose email addresses in
table 1 DON'T EQUAL those in table two.

In table one I have 2026 records
In table two I have 240 records

The query is this:

SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
t1.email != t2.email

When I do this query, I get 486,057 results returned.

Where am I going wrong? Any ideas?

Thanks so much for the help!

Aaron






RE: anyone help with this query? Returning tooooo many results

2004-07-09 Thread Aaron Wolski
Hi all,

First... I just want to thank everyone for their help and explanations
of how I was going wrong, and the measures to correct my logic!

Great, great advice.

Shawn's solution worked absolutely wonderful for my needs.

My next question is how do I reverse the query so that I can get all of
those customers who DO have an email address that matches in each table?

Thanks again guys. Very much appreciated!

Aaron

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: July 9, 2004 12:17 PM
 To: Aaron Wolski
 Cc: [EMAIL PROTECTED]
 Subject: Re: anyone help with this query? Returning to many
results
 
 
 You have written a cross-product join. This is what happened but with
a
 much smaller example:
 
 Assume you have two tables: Colors and Sizes
 
 CREATE TABLE Colors (
   id int auto_increment primary key
   , name varchar(10)
 );
 
 CREATE TABLE Sizes (
   id int auto_increment primary key
   , abbr varchar(6)
 );
 
 And you populate them with the following data:
 
 INSERT Colors (name) VALUES ('Red'),('Blue'),('Yellow'),('Violet');
 INSERT Sizes (abbr) VALUES ('XS'),('M'),('L'),('XL'),('XXL');
 
 
 This query:
 SELECT colors.name, sizes.abbr FROM Colors, Sizes;
 
 Returns:
 ++--+
 | name   | abbr |
 ++--+
 | Red| XS   |
 | Blue   | XS   |
 | Yellow | XS   |
 | Violet | XS   |
 | Red| M|
 | Blue   | M|
 | Yellow | M|
 | Violet | M|
 | Red| L|
 | Blue   | L|
 | Yellow | L|
 | Violet | L|
 | Red| XL   |
 | Blue   | XL   |
 | Yellow | XL   |
 | Violet | XL   |
 | Red| XXL  |
 | Blue   | XXL  |
 | Yellow | XXL  |
 | Violet | XXL  |
 ++--+
 20 rows in set (0.04 sec)
 
 Notice that every possible combination between color and size is listed?
 When you wrote your query, you also asked the query engine to create every
 possible combination between each customer in the first table and every
 customer in the second table. That resulted in 486,240 matches. Then the
 engine applied your WHERE condition to all of those matches and ELIMINATED
 all of the rows where the email addresses were the SAME between the two
 tables, so you wound up with *only* 486,057 combinations of customers
 between the two tables where their addresses were different.
 
 I think what you wanted to find was all of the rows in one table that
 didn't match any rows in the other table. You can do it with this
 statement:
 
 SELECT a.ID, a.First, a.Last, a.Email
 FROM producta_customers a
 LEFT JOIN productb_customers b
   ON a.email=b.email
 WHERE b.id is null
 
 This will give you all of the records in producta_customers that DO
NOT
 have a matching email address in the productb_customers table.
 
 Yours,
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine
 
 
 
    Aaron Wolski [EMAIL PROTECTED]
    To: [EMAIL PROTECTED]
    07/09/2004 11:33 AM
    Subject: anyone help with this query? Returning tooooo many results
 
 
 
 
 
 
 Hi all,
 
 Having a problem with a query that's returning 486,057 results when it
 most definitely should NOT be doing that.
 
 I have two tables:
 
 1 for a list of customers that purchase product A, another for
customers
 who purchased product B.
 
 Columns are:
 
 Id
 First
 Last
 Email
 
 I am trying to compare table 1 to table 2 to get a result set that
gives
 me the contact info (table columns) for those whose email addresses in
 table 1 DON'T EQUAL those in table two.
 
 In table one I have 2026 records
 In table two I have 240 records
 
 The query is this:
 
 SELECT * FROM producta_customers AS t1, productb_customers AS t2 WHERE
 t1.email != t2.email
 
 When I do this query, I get 486,057 results returned.
 
 Where am I going wrong? Any ideas?
 
 Thanks so much for the help!
 
 Aaron
 
 
 
 
 
 
 
 
 
 
 




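The difference between the original cross-product form and Shawn's LEFT JOIN anti-join can be seen on a toy data set. A sqlite3 sketch (tiny made-up rows; with the real 2026 × 240 tables, the cross form is exactly what produced the 486,057 rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE producta_customers (id INTEGER PRIMARY KEY, email TEXT);
CREATE TABLE productb_customers (id INTEGER PRIMARY KEY, email TEXT);
INSERT INTO producta_customers (email) VALUES ('a@x.com'), ('b@x.com'), ('c@x.com');
INSERT INTO productb_customers (email) VALUES ('b@x.com'), ('d@x.com');
""")

# Cross-product form: compares every pair of rows (3 x 2 = 6), keeping
# the pairs whose emails differ -- 5 rows, most of them meaningless.
cross = conn.execute(
    "SELECT count(*) FROM producta_customers t1, productb_customers t2 "
    "WHERE t1.email != t2.email").fetchone()[0]

# Anti-join form: one row per producta customer with no match in productb.
anti = conn.execute(
    "SELECT count(*) FROM producta_customers a "
    "LEFT JOIN productb_customers b ON a.email = b.email "
    "WHERE b.id IS NULL").fetchone()[0]

print(cross, anti)  # 5 2
```

Swapping `WHERE b.id IS NULL` for an INNER JOIN, as in Shawn's follow-up, gives the customers who *do* appear in both tables.
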


RE: anyone help with this query? Returning tooooo many results

2004-07-09 Thread Aaron Wolski
Well well...

That worked too!

Damn... this is starting to make life easier :)

Thanks again. Very much appreciated!!!

Aaron

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: July 9, 2004 2:00 PM
 To: Aaron Wolski
 Cc: [EMAIL PROTECTED]
 Subject: RE: anyone help with this query? Returning to many
results
 
 
 Aaron,
 
 That would be an INNER JOIN situation:
 
 SELECT a.ID, a.First, a.Last, a.Email
 FROM producta_customers a
 INNER JOIN productb_customers b
ON a.email=b.email
 
 Yours,
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine
 
 
 
 
 
    Aaron Wolski [EMAIL PROTECTED]
    To: [EMAIL PROTECTED]
    cc: [EMAIL PROTECTED]
    07/09/2004 01:10 PM
    Subject: RE: anyone help with this query? Returning tooooo many results
 
 
 
 
 
 
 Hi all,
 
  First... I just want to thank everyone for their help and explanations
 of how I was going wrong, and the measures to correct my logic!
 
 Great, great advice.
 
 Shawn's solution worked absolutely wonderful for my needs.
 
  My next question is how do I reverse the query so that I can get all of
  those customers who DO have an email address that matches in each table?
 
 Thanks again guys. Very much appreciated!
 
 Aaron
 
  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
  Sent: July 9, 2004 12:17 PM
  To: Aaron Wolski
  Cc: [EMAIL PROTECTED]
  Subject: Re: anyone help with this query? Returning to many
 results
 
 
  You have written a cross-product join. This is what happened but
with
 a
  much smaller example:
 
  Assume you have two tables: Colors and Sizes
 
  CREATE TABLE Colors (
id int auto_increment primary key
, name varchar(10)
  );
 
  CREATE TABLE Sizes (
id int auto_increment primary key
, abbr varchar(6)
  );
 
  And you populate them with the following data:
 
  INSERT Colors (name) VALUES ('Red'),('Blue'),('Yellow'),('Violet');
  INSERT Sizes (abbr) VALUES ('XS'),('M'),('L'),('XL'),('XXL');
 
 
  This query:
  SELECT colors.name, sizes.abbr FROM Colors, Sizes;
 
  Returns:
  ++--+
  | name   | abbr |
  ++--+
  | Red| XS   |
  | Blue   | XS   |
  | Yellow | XS   |
  | Violet | XS   |
  | Red| M|
  | Blue   | M|
  | Yellow | M|
  | Violet | M|
  | Red| L|
  | Blue   | L|
  | Yellow | L|
  | Violet | L|
  | Red| XL   |
  | Blue   | XL   |
  | Yellow | XL   |
  | Violet | XL   |
  | Red| XXL  |
  | Blue   | XXL  |
  | Yellow | XXL  |
  | Violet | XXL  |
  ++--+
  20 rows in set (0.04 sec)
 
   Notice that every possible combination between color and size is listed?
   When you wrote your query, you also asked the query engine to create every
   possible combination between each customer in the first table and every
   customer in the second table. That resulted in 486,240 matches. Then the
   engine applied your WHERE condition to all of those matches and ELIMINATED
   all of the rows where the email addresses were the SAME between the two
   tables, so you wound up with *only* 486,057 combinations of customers
   between the two tables where their addresses were different.
 
  I think what you wanted to find was all of the rows in one table
that
  didn't match any rows in the other table. You can do it with this
  statement:
 
  SELECT a.ID, a.First, a.Last, a.Email
  FROM producta_customers a
  LEFT JOIN productb_customers b
ON a.email=b.email
  WHERE b.id is null
 
  This will give you all of the records in producta_customers that DO
 NOT
  have a matching email address in the productb_customers table.
 
  Yours,
  Shawn Green
  Database Administrator
  Unimin Corporation - Spruce Pine
 
 
 
     Aaron Wolski [EMAIL PROTECTED]
     To: [EMAIL PROTECTED]
     07/09/2004 11:33 AM
     Subject: anyone help with this query? Returning tooooo many results
 
 
 
 
 
 
  Hi all,
 
  Having a problem with a query that's returning 486,057 results when
it
  most definitely should NOT be doing that.
 
  I have two tables:
 
  1 for a list of customers that purchase product A, another for
 customers
  who purchased product B.
 
  Columns are:
 
  Id
  First
  Last
  Email
 
  I am trying to compare table 1 to table 2 to get a result set that
 gives
  me the contact info (table columns) for those whose email addresses
in
  table 1 DON'T EQUAL those in table two.
 
  In table one I have 2026 records
  In table two I have 240 records
 
  The query is this:
 
  SELECT * FROM producta_customers AS t1

Session wait_timeout and interactive_timeout variables

2004-07-07 Thread Aaron Jacobs
Do the following commands, when run from an already-established 
connection, actually have any bearing on anything?

SET SESSION wait_timeout=10;
SET SESSION interactive_timeout=10;
I am working on an application using the C API that needs to lock 
tables while doing updates.  This works fine except for in the rare 
case when the computer the application is running on loses connectivity 
while a table is locked.  Normally this wouldn't be a huge deal, but I 
am concerned because the application will mostly be running over a 
wireless connection which is of course subject to loss of signal.  So 
if one person running the application loses his signal, no one else 
will be able to do anything.

The problem I'm running into is this - I'm testing out this situation 
by running the mysql command-line client on one computer, write-locking 
a table, starting a select query from that table on another machine, 
and turning off the wireless to the first machine.  The second machine 
continues to hang as the mysql server doesn't kill the connection to 
the first machine and thus its lock is still in effect.

Now I believe that wait_timeout and/or interactive_timeout are what I 
need to help me out by killing such dead threads and thus setting a 
maximum time that others could hang.  This is all well and great, but I 
am not the admin of the server, so I can't set them globally and the 
default values of 8 hours are obviously unreasonable for me.  In the 
testing sequence described above, I tried running

SET SESSION wait_timeout=10;
SET SESSION interactive_timeout=10;
on the first machine before acquiring the lock and turning off 
wireless, but it seemed to have no effect as the second machine 
continued to hang for the seven minutes I let it sit.  After setting 
the session variables I tried checking them with a select statement, 
and they showed 10 seconds like they should.  It just seems like the 
server doesn't actually honor them.

Is there a problem with the server here, or am I just misunderstanding 
the way session variables are intended to work?  Or is the way I am 
simulating this case flawed?  I must say that the documentation is not 
very verbose about server variables and so wasn't extremely helpful to 
me in solving this problem.

If there's another better way to solve my locking issue, I would love 
to hear that as well.

Thanks,
Aaron Jacobs
[EMAIL PROTECTED]


making lower case then first char to upper case?

2004-06-30 Thread Aaron Wolski
Hi Guys,
 
I'm trying to figure out if this is possible. I know I could do it in
PHP, but I am dealing with a ton of records and would rather put the
processing on the DB than on the PHP/client side.
 
Question is: can I do a SELECT query on a column that changes all the
results to lower case and THEN changes the first character of each
result to upper case?
 
Example:
 
Currently in DB: AARON
to Lowercase: aaron
to Uppercase: Aaron
 
 
Any idea on if I can do this and how I might approach it?
 
Thanks so much
 
Aaron
 


Tough Query Problem

2004-06-16 Thread Aaron Clausen
I am trying to write a script that can take logs from our mail server and boil down the 
rejections to determine the sources of distributed SMTP dictionary attacks against our 
mail server.

Basically I have a table send_failures like this that gets fed with the raw data from 
the logs:

host_ip          date
-----------------------------------
111.111.111.110  2004-06-03 13:42:22
34.23.28.1       2004-06-03 13:42:25
65.2.88.25       2004-06-03 13:42:25
111.111.111.110  2004-06-03 13:42:27
65.2.88.25       2004-06-03 13:42:29
64.251.68.72     2004-06-03 13:42:30
And so on and so forth.

Now it's trivial to write a query to find the pure counting of the attacks:

SELECT host_ip, COUNT(host_ip) AS attempts FROM send_failures GROUP BY host_ip;

However, I also want the latest date of the attack included as well, so that the 
above example would boil down to a query with results like this (I'm running MySQL 
3.23.58):

host_ip          attempts  last_attempt_date
---------------------------------------------
111.111.111.110  2         2004-06-03 13:42:27
65.2.88.25       2         2004-06-03 13:42:29
64.251.68.72     1         2004-06-03 13:42:30
34.23.28.1       1         2004-06-03 13:42:25

Obviously the actual table is going to have hundreds of thousands of entries (the log 
file for a couple of days ago had 1.2 million rejections).

-- 
A. Clausen
[EMAIL PROTECTED]


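No reply to the thread above is preserved here, but MAX() is itself a grouped aggregate, so the latest date can ride along in the same GROUP BY — no subqueries needed, which matters on 3.23. A sqlite3 sketch using the sample rows from the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE send_failures (host_ip TEXT, date TEXT)")
conn.executemany("INSERT INTO send_failures VALUES (?, ?)", [
    ("111.111.111.110", "2004-06-03 13:42:22"),
    ("34.23.28.1",      "2004-06-03 13:42:25"),
    ("65.2.88.25",      "2004-06-03 13:42:25"),
    ("111.111.111.110", "2004-06-03 13:42:27"),
    ("65.2.88.25",      "2004-06-03 13:42:29"),
    ("64.251.68.72",    "2004-06-03 13:42:30"),
])

# COUNT and MAX aggregate over the same groups, so one pass over the
# table produces both the attempt count and the last attempt time.
rows = conn.execute(
    "SELECT host_ip, COUNT(*) AS attempts, MAX(date) AS last_attempt_date "
    "FROM send_failures GROUP BY host_ip ORDER BY attempts DESC"
).fetchall()
for r in rows:
    print(r)
```

With an index on host_ip this stays reasonable even at the 1.2-million-row scale mentioned; ISO-formatted date strings sort correctly, so MAX() on them behaves like MAX() on a DATETIME.
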


AS error with version 4.0.18-standard

2004-05-22 Thread Aaron Wolski
Hi all,

Have a query that used to work on one server but then MYSQL version was
upgraded and now it no longer works. Here's the query:

SELECT *, SUM(quantity)*100/ AS percentage FROM CartTable WHERE company
='1' AND submitted='1' AND dateinserted>='946702800' AND
dateinserted<='1085247047' AND product_index='148' GROUP BY
product_index,colour,size ORDER BY percentage DESC LIMIT 1


This is the error I get:

#1064 - You have an error in your SQL syntax.  Check the manual that
corresponds to your MySQL server version for the right syntax to use
near 'AS percentage FROM CartTable WHERE company ='1' AND submitted='


Any clue why this would no longer work?

Thanks!

Aaron






sql syntax error

2004-04-20 Thread Aaron P. Martinez
I'm using amavisd-new -20030616p9, RH 3.0 ES and mysql 3.23.58-1 trying
to do sql lookups for user prefs.  I've done this before and have
compared my sql statements and can't figure out the problem. 

When i start amavisd-new with the debug switch, here's what i get:

# /usr/local/sbin/amavisd debug
Error in config file /etc/amavisd.conf: syntax error at
/etc/amavisd.conf line 829, near ' ORDER BY users.priority DESC ';

Here are the lines from my /etc/amavisd.conf file:

$sql_select_policy = 'SELECT *,users.vuid FROM users,policy_names'.
  ' WHERE (users.policy=policy_names.id) AND (users.username IN (%k))'.
  ' ORDER BY users.priority DESC ';

Please help!

Thanks in advance,

Aaron





RE: Administrative limitation

2004-04-02 Thread Aaron Wolski
Yes.

Or find a host that will give you ability for multiple databases.

HTH

Aaron

 -Original Message-
 From: David Fleming [mailto:[EMAIL PROTECTED]
 Sent: April 2, 2004 10:30 AM
 To: [EMAIL PROTECTED]
 Subject: Administrative limitation
 
 (newbie question)
 
 Our web host provides one (1) MySQL database for our account, and
 phpmyadmin as the admin interface. We don't have privileges to create
 a new database. We would like to install 2 or 3 unrelated applications
 that will utilize the database (forum, image gallery, product
 catalog).
 
 Is the fact that we can't create a separate database for each
 application going to be a problem? As long as there's no duplication
 of table names, can multiple applications safely use a single
 database?
 
 Thanks for the help.
 
 --
 David Fleming
 [EMAIL PROTECTED]
 
 
 






@@identity

2004-03-10 Thread Aaron
How can I select the last record that was inserted? An ASP/VB example would
be great too!





RE: @@identity

2004-03-10 Thread Aaron
Great - thanks! 

-Original Message-
From: Victor Pendleton [mailto:[EMAIL PROTECTED] 
Sent: March 10, 2004 5:36 PM
To: 'Aaron '; '[EMAIL PROTECTED] '
Subject: RE: @@identity

Rough example. Assuming that you are using the same connection since
last_insert_id() is connection specific.

rset.open "INSERT INTO table1 values(someValue)" 
rset.open "SELECT last_insert_id() as identity"
identity = rset.fields("identity")


-Original Message-
From: Aaron
To: [EMAIL PROTECTED]
Sent: 3/10/04 4:04 PM
Subject: @@identity

How can I select the last record that was inserted? An ASP/VB example would
be great too!





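Victor's pattern can be sketched outside ASP; sqlite's last_insert_rowid() plays the role of MySQL's LAST_INSERT_ID(), and like it, is scoped to the current connection, so concurrent inserts from other connections cannot interfere:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY AUTOINCREMENT, val TEXT)")

cur = conn.cursor()
cur.execute("INSERT INTO table1 (val) VALUES ('someValue')")

# Must run on the same connection as the INSERT -- the value is
# per-connection, just like MySQL's last_insert_id().
identity = cur.execute("SELECT last_insert_rowid()").fetchone()[0]
print(identity)  # 1
```

The key point in both systems is reusing the connection that performed the INSERT; fetching the id over a fresh connection returns 0 (or the wrong row).
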


Please help with check syntax

2004-01-29 Thread Aaron P. Martinez
I am trying to set up a table from a script that came with some software
(Value accounting/CRM) and I'm getting a few errors, one of which I
can't seem to figure out/fix.  

My system is RH 3.0 ES with mysql-server-3.23.58-1.  I have innodb
tables configured with the following statement in my /etc/my.cnf:
innodb_data_file_path = ibdata1:10M:autoextend
set-variable = innodb_buffer_pool_size=70M
set-variable = innodb_additional_mem_pool_size=10M
set-variable = innodb_log_file_size=20M
set-variable = innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1


The create table statement is below followed by the error.  

create table ItemSalesTax (
 STYPE  integer not null primary key,   /* STax Type */
 SDESC  varchar(35),
 SPERC  numeric(13,4) zerofill not null /* Percentage */
check(SPERC >= 0),
 SCONUM integer not null,
 SYRNO  integer not null,
 SLEVEL varchar(4) not null, /* Access Control Level */
 constraint staxlevel_ck
check (SLEVEL in ('READ','RW','DENY')),
 constraint STax_fk foreign key (SCONUM, SYRNO)
references AccYear(ACONUM, AYEARNO)
);



ERROR 1064: You have an error in your SQL syntax near 'check(SPERC >= 0),
 SCONUM integer not null,
 SYRNO  i' at line 9

I am not great w/mysql but gradually learning.  I have looked in the
online manual and can't find anything that helps.
I would really like to get this going as soon as possible to
evaluate...any and all help is GREATLY appreciated.

Thanks,

Aaron Martinez





Does the MySQL packet limitation still exist?

2004-01-23 Thread Aaron P. Lopez
Hello,

 Does the packet limitation of 16MB still exist for the client/server
protocol in MySQL?

I am trying to upload files greater than 16MB from a php/apache interface,
with no success. Files < 15MB are saved just fine into the database. The
datatype on the field is longblob. From the mysql cli I can load files
into the database that are greater than 16MB with no problem. I am running
MySQL on a redhat 7.3 box with the following MySQL rpms installed:
MySQL-server-4.0.17-0
MySQL-client-4.0.17-0
MySQL-Max-4.0.17-0.
I am running apache 1.3.29 and php 4.3.2 installed.
The server has 128MB of RAM


MySQL is starting from /etc/init.d/mysql with the following in the startup
script:
$bindir/mysqld_safe --datadir=$datadir --pid-file=$pid_file
--max_allowed_packet=64M -O key_buffer=192M -O table_cache=128
--log_bin=peerreview-binary-update-log --max_binlog_size=256M 

When running: `mysqladmin VARIABLES -h hostname -p|grep allowed`
I get:
max_allowed_packet   67107840

The maximum post size in my php.ini file is: 32MB
The maximum uploadable file size is: 32MB

According to http://www.mysql.com/doc/en/Packet_too_large.html
this limit should be raised beyond 16MB with MySQL 4.0.1+.

At the very least I should be able to save a file of size 32MB from the
php interface, but I cannot.
What else would be causing this limitation?



Cheers,
Aaron







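For an upload path like the one above, there are three separate limits in play, and the smallest one wins: PHP's upload/post limits, and max_allowed_packet on both the server and the client library. The client library PHP was built against has its own copy of the setting, which is a plausible reason uploads stall near 16MB even when the server allows more. An illustrative sketch of the relevant settings (values are examples, not recommendations):

```ini
; php.ini -- both limits must exceed the largest file
upload_max_filesize = 64M
post_max_size       = 64M
; memory_limit must be larger still if the script buffers the whole file

; my.cnf
[mysqld]
max_allowed_packet = 64M

[client]
; the client-side library enforces its own copy of the limit
max_allowed_packet = 64M
```

If the client library was compiled with a hard 16MB cap, no runtime setting helps; rebuilding PHP against a newer libmysqlclient would be the remaining step.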


Can't connect to MySQL server

2004-01-10 Thread Aaron Gerber
I'm new to the list, hope this is the correct mailing list.

I have a new installation of MySQL 4.0.16. Default binary installation 
(apt-get).  It's running on linux (morphix .4).

Locally, I can connect to the MySQL server
 mysql -h localhost -p
I can also connect through phpmyadmin (remotely or locally).
http://xxx.168.xxx.xxx/phpmyadmin/
but I can't connect if I put in the IP address (locally or remotely)
 mysql -h xxx.168.xxx.xxx -p
password:
ERROR 2003: Can't connect to MySQL server on 'xxx.168.xxx.xxx' (22) --- 
locally
ERROR 2003: Can't connect to MySQL server on 'xxx.168.xxx.xxx' (61) --  
remotely

I've looked all through the documentation from MySQL and the archives 
on the web.  From looking at the docs, it seems that my problem lies either 
in how the user accounts are set up or in how the server is set up.  
My guess is the user accounts.  I've set up the user to be able to 
connect from anywhere.  Summary(user: xxx, host: %, password: Yes, 
privileges: ALL PRIVILEGES , grant: Yes).  I used phpmyadmin to set up 
the accounts.

Thanks in advance,

Aaron-



Re: Can't connect to MySQL server

2004-01-10 Thread Aaron Gerber
Roger and Johannes,

The default did have the skip-networking on.  I commented that out as 
you advised, and it worked like a Charm!!  I'm pretty excited to be 
able to have that working now.  I really really appreciate the prompt 
reply and the consideration that you both took in your replies.

Thanks again.

Aaron-

On Jan 10, 2004, at 3:20 PM, Johannes Franken wrote:

* Aaron Gerber [EMAIL PROTECTED] [2004-01-10 21:33 +0100]:
Locally, I can connect to the MySQL server [...]
but I can't connect if I put in the IP address (locally or remotely)
Maybe mysqld is configured to listen on its unix domain socket only?
This is the default for Debian.
Remove the line 'skip-networking' from /etc/mysql/my.cnf,
type /etc/init.d/mysql restart and
try connecting to the IP address again.
On Jan 10, 2004, at 3:30 PM, Roger Baklund wrote:
Note that there are two ways to connect to the mysql server: using 
sockets
(or named  pipes on windows) or using TCP/IP. If you provide -h
hostname-or-ip-address a TCP/IP connection is used, if you provide -h
localhost or no -h parameter, a unix socket is used.

You are not running the server with --skip-networking, are you? If 
you
are, TCP/IP support is disabled. Have you configured the server to use 
a
non-standard port (different from 3306)? In that case, you must 
provide port
number when starting the client:

mysql -h xxx.168.xxx.xxx --port=3307 -u xxx -p




Re: Rollback

2004-01-02 Thread Aaron Wohl
You can add a version field to each row.  Then add a separate table with
a list of the versions and a flag for deleted.  Queries would look for
each record that has the highest version number that's not deleted.
Keeping a lot of undo/redo info can get complicated, especially with
multiple end users playing with it and chains of dependent changes.

If the info can be modeled as documents, this is frequently done with
CVS.
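
That row-versioning scheme can be sketched in SQL (a hypothetical illustration, not from the thread; all table and column names are invented):

```sql
-- Versions table: one row per committed change set.
CREATE TABLE versions (
  version INT NOT NULL PRIMARY KEY,
  deleted TINYINT NOT NULL DEFAULT 0   -- set to 1 to "roll back" that version
);

-- Data rows carry the version they were written under.
CREATE TABLE accounts (
  acct_id INT NOT NULL,
  version INT NOT NULL,
  balance DECIMAL(10,2),
  PRIMARY KEY (acct_id, version)
);

-- Current view of a record: highest version not marked deleted.
SELECT a.*
FROM accounts a, versions v
WHERE a.acct_id = 42
  AND a.version = v.version
  AND v.deleted = 0
ORDER BY a.version DESC
LIMIT 1;
```

Rolling back a committed version then amounts to setting its deleted flag rather than restoring a dump.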

On Fri, 2 Jan 2004 13:06:36 +0530, karthikeyan.balasubramanian
[EMAIL PROTECTED] said:
 Hi,
 
   I posted this question in MySQL mailing list and got no reply.
 
 The basic problem is that I have committed the transaction and then
 replicated to another DB. Now I want to rollback the committed
 transaction.
 Is there a way to rollback to a particular point. This requirement is
 very
 similar to rolling back using save points. I guess an option would be to
 backup database before changes and restore it if the user is not
 satisfied
 with the changes he has made. One transaction in my application would
 affect
 6-8 tables with at least 50 - 100 records getting inserted/updated or
 deleted.
 
 Please advise
 
 PS : Wish you all a very Happy New Year
 
 Karthikeyan B
 
 
 
 



VC++ and mysql and openssl

2003-12-27 Thread Aaron Hagan
Hello there,
I am having a bit of trouble getting an SSL-enabled client working on Windows. For 
testing purposes I was trying to compile mysqldump using SSL. It compiles and loads 
okay, but right after I enter the password I get a "debug assertion failed" in file 
dbgheap.c on line 1044. (The server it's talking to is MySQL version 4.0.13 on 
Unix.)

I am using OpenSSL version 0.9.7a and MySQL 4.0.17.  Has anyone else seen this 
type of error before?

thanks for the help!

Aaron 

 




View SSL connections ?

2003-12-24 Thread Aaron Hagan
Is there any way to see which connection threads are using SSL and which ones are not?

For example, when I type
mysql> status;
one of the returned lines is
SSL: Cipher in use is DHE-RSA-AES256-SHA

Is there any way to see the status of other threads like this?

Or is there any way the 'show processlist;' command can display which threads are
encrypted and which ones aren't?

thanks
aaron 

 




Database Attributes

2003-11-26 Thread Aaron Efland
Hello,

I was wondering if someone could provide a brief description for both of
the following attributes within the 'db' table:

'create_tmp_table_priv' and 'References_priv'

Any information you can provide will be helpful.


Thanks,

Aaron






Re: Migration to INODB?

2003-11-21 Thread Aaron Wohl
Run all your tables thru this.  Save the script as xinno_convert.  As in
xinno_convert  foo.sql | mysql

Supposedly you can also ALTER a table to InnoDB.  But that never worked
for me; it just looped, using CPU for days.  So I made this script, which
worked fine.  Other than the conversion itself, implicit locking is
different: InnoDB assumes you're using transactions.


#!/usr/bin/perl

#convert all tables in a mysql dump to innodb
use strict;
while (<>) {
  my $aline = $_;
  if ($aline =~ /(\) TYPE=)[A-Za-z]*ISAM(.*)$/) {
    # matches e.g.: ) TYPE=MyISAM COMMENT='Users and global privileges';
    $aline = "$1INNODB;\n";
  }
  print $aline;
}




upgraded, now nulls act differently

2003-10-03 Thread Aaron P. Martinez
I am running RH 2.1 ES; I recently upgraded to 3.23.56-1.72.  I am using
mysql as a backend for the postfix MTA.  I use webmin to add new users to my
database.  Today, after upgrading, there was a distinct problem.  I
added a user and left a field blank (the relocate_to field), which,
unless I put something there, I expect to be treated as null.
Unfortunately, when postfix checked the database, the new user had a
relocated_to value, even though I left the field blank.  The odd thing
is: when I upgraded mysql, the null must have stuck from the upgrade,
because the problem only presents itself when I add NEW users to the
database.

Any help as to how i can correct this?

Thanks in advance,

Aaron Martinez





IP Address Operations in MySQL

2003-09-23 Thread Aaron Clausen
I'm running 3.23.49 on a Win2k machine and wonder if anybody has some advice on how to 
write queries dealing with IP addresses.  Specifically, I have a table with an IP 
address field, and I want to be able to do queries based on subnet.  Has anybody got 
anything like this?
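
One common pattern (a sketch, not from the thread; it assumes the address is stored as an unsigned 32-bit integer, and the table and column names are hypothetical) is to mask the stored integer and compare it against INET_ATON():

```sql
-- Hypothetical table: hosts(name VARCHAR(64), ip INT UNSIGNED)
-- All hosts on the 192.168.1.0/24 subnet:
SELECT name, INET_NTOA(ip)
FROM hosts
WHERE ip & INET_ATON('255.255.255.0') = INET_ATON('192.168.1.0');
```

Any mask width works the same way, since the netmask is just another 32-bit value to AND against.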

--
Aaron Clausen

[EMAIL PROTECTED] or [EMAIL PROTECTED]




RE: Help debugging this Query?

2003-08-14 Thread Aaron Wolski
Hi Andy,

Thanks for your comments and my apologies on the late reply.

To optimize my queries I restructured the tables - adding another table
into the mix.

Here is my table structure:

CREATE TABLE kcs_thread_types ( 
   id int(11) NOT NULL auto_increment, 
   typeName varchar(255), 
   PRIMARY KEY (id), 
   UNIQUE typeName (typeName) 
); 


CREATE TABLE kcs_threads ( 
   id int(11) NOT NULL auto_increment, 
   dateinserted timestamp(14), 
   manufacturer varchar(255), 
   type_index int(11), 
   newUrlType varchar(255), 
   colour varchar(255), 
   colourID varchar(255), 
   price decimal(8,2), 
   image varchar(255), 
   PRIMARY KEY (id) 
); 

CREATE TABLE kcs_threadgroups ( 
   id int(11) NOT NULL auto_increment, 
   groupName varchar(255), 
   groupNameUrl varchar(255), 
   type_index int(11), 
   thread_index varchar(255), 
   PRIMARY KEY (id) 
);

The query I am now using is:

SELECT * FROM kcs_threads as t1 LEFT JOIN kcs_threadgroups as t2 ON
t1.type_index=t2.type_index LEFT JOIN kcs_category_threads as t3 ON
t2.type_index=t3.id WHERE t1.manufacturer='DMC' ORDER BY
t1.type,t2.groupName;


When I do an explain on the query I get:

+-------+--------+---------------+---------+---------+---------------+------+------------+
| table | type   | possible_keys | key     | key_len | ref           | rows | Extra      |
+-------+--------+---------------+---------+---------+---------------+------+------------+
| t1    | ALL    | NULL          | NULL    |    NULL | NULL          | 2067 | where used |
| t2    | ALL    | NULL          | NULL    |    NULL | NULL          |  286 |            |
| t3    | eq_ref | PRIMARY       | PRIMARY |       4 | t2.type_index |    1 |            |
+-------+--------+---------------+---------+---------+---------------+------+------------+

I am still getting the output I had before, with 77,000+ results being
returned.

I am at wit's end here and don't know where else to look *shrugs*

ANY clues? Do you want to see some of the table data?

Thanks

Aaron

 -Original Message-
 From: Andy Jackman [mailto:[EMAIL PROTECTED]
 Sent: August 11, 2003 2:21 PM
 To: Aaron Wolski
 Cc: [EMAIL PROTECTED]
 Subject: Re: Help debugging this Query?
 
 Aaron,
 It sounds like the join has duplicates on both sides. If you join a-b
 where a is unique (like a primary key) then you will get count(b)
 records (where b is the rows that match a). Or if b is unique then you
 will get count(a) records. However if neither a or b is unique you get
 count(a) * count(b) records. Without you tabledefs it is difficult to
 see if this is the case.
 
 Try this:
 select count(*) as xx from kcs_threadgroups group by threadType having
 xx  1;
 If you get a result then you have duplicates on threadType
 
 select count(*) as xx from kcs_threads where manufacturer='DMC' group
by
 type having xx  1;
 If you get a result then you have duplicates on Type for manufacturer
 DMC.
 
 If you have dups for both then you are getting the result you are
asking
 for.
 
 If this doesn't help, please publish your tabledefs. It's ok to
simplify
 them so we don't have to wade through tons of stuff that has nothing
to
 do with the problem.
 
 Regards,
 Andy.
 
 Aaron Wolski wrote:
 
  Hi Guys,
 
  I have 2 queries:
 
  select * FROM kcs_threadgroups AS t1, kcs_threads AS t2 WHERE
  t2.manufacturer='DMC' AND t1.thread_index=t2.id ORDER BY t2.type,
  t1.groupName
 
  Using the explain command (explain select.) I get:
 
 
  +-------+--------+---------------+---------+---------+-----------------+------+---------------------------------+
  | table | type   | possible_keys | key     | key_len | ref             | rows | Extra                           |
  +-------+--------+---------------+---------+---------+-----------------+------+---------------------------------+
  | t1    | ALL    | NULL          | NULL    |    NULL | NULL            |  286 | Using temporary; Using filesort |
  | t2    | eq_ref | PRIMARY       | PRIMARY |       4 | t1.thread_index |    1 | where used                      |
  +-------+--------+---------------+---------+---------+-----------------+------+---------------------------------+
 
 
 
  select * FROM kcs_threadgroups AS t1, kcs_threads AS t2 WHERE
  t2.manufacturer='DMC' AND t1.threadType=t2.type ORDER BY t2.type,
  t1.groupName
 
  Using the explain command I get:
 
 
  +-------+------+---------------+------+---------+------+------+---------------------------------+
  | table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
  +-------+------+---------------+------+---------+------+------+---------------------------------+
  | t1    | ALL  | NULL          | NULL |    NULL | NULL |  286 | Using temporary; Using filesort |
  | t2    | ALL  | NULL          | NULL |    NULL | NULL | 2067 | where used                      |
  +-------+------+---------------+------+---------+------+------+---------------------------------+
 
 
  With the second query, I am getting over 77,000 results returned

Help debugging this Query?

2003-08-14 Thread Aaron Wolski
Hi Guys,
 
I have 2 queries:
 
select * FROM kcs_threadgroups AS t1, kcs_threads AS t2 WHERE
t2.manufacturer='DMC' AND t1.thread_index=t2.id ORDER BY t2.type,
t1.groupName
 
Using the explain command (explain select.) I get:
 
+-------+--------+---------------+---------+---------+-----------------+------+---------------------------------+
| table | type   | possible_keys | key     | key_len | ref             | rows | Extra                           |
+-------+--------+---------------+---------+---------+-----------------+------+---------------------------------+
| t1    | ALL    | NULL          | NULL    |    NULL | NULL            |  286 | Using temporary; Using filesort |
| t2    | eq_ref | PRIMARY       | PRIMARY |       4 | t1.thread_index |    1 | where used                      |
+-------+--------+---------------+---------+---------+-----------------+------+---------------------------------+
 
 
 
select * FROM kcs_threadgroups AS t1, kcs_threads AS t2 WHERE
t2.manufacturer='DMC' AND t1.threadType=t2.type ORDER BY t2.type,
t1.groupName
 
Using the explain command I get:
 
+-------+------+---------------+------+---------+------+------+---------------------------------+
| table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
+-------+------+---------------+------+---------+------+------+---------------------------------+
| t1    | ALL  | NULL          | NULL |    NULL | NULL |  286 | Using temporary; Using filesort |
| t2    | ALL  | NULL          | NULL |    NULL | NULL | 2067 | where used                      |
+-------+------+---------------+------+---------+------+------+---------------------------------+
 
 
With the second query, I am getting over 77,000 results returned and
with the first query I am getting 2067 - the correct amount.
 
The only differences between the 2 is that in the first I have  WHERE
clause t1.thread_index=t2.id and in the second I have
t1.threadType=t2.type
 
I need to have the WHERE like the second query but for the life of my I
have no clue why it is returning over 77,000 results.
 
Can anyone help guide me to get the results of #1 but with the WHERE of
#2?
 
Thanks so much!
 
Aaron
 


Re: MySQL field data type for ISBN numbers

2003-08-10 Thread Aaron Holmes
Perhaps it's just something trivial, but the two numbers differ:

 SELECT * FROM book_details WHERE ISBN = '1---1'

and

and I've put a test ISBN number in of 1--111-11

In any case, I have run a test on my servers with MySQL 4.0.13 and things
work accordingly.

Thanks,

Aaron Holmes
[EMAIL PROTECTED]
CEO Gurix Web Professionals
www.gurix.com
- Original Message - 
From: James Johnson [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, August 10, 2003 8:58 PM
Subject: RE: MySQL field data type for ISBN numbers


 Hi Daniel,

 Here's what is echoed back. It looks valid to me.

 SELECT * FROM book_details WHERE ISBN = '1---1'

 James
 -Original Message-
 From: Daniel Kasak [mailto:[EMAIL PROTECTED]
 Sent: Sunday, August 10, 2003 5:40 PM
 To: James Johnson; [EMAIL PROTECTED]
 Subject: Re: MySQL field data type for ISBN numbers


 James Johnson wrote:

 Hi,
 
 I have a MySQL database for books. The ISBN field is set as varchar(15)
 and I've put a test ISBN number in of 1--111-11. Can someone tell
 me why this SQL query isn't working?
 
$ISBN = $_GET['isbn'];
$query = "SELECT * FROM book_details WHERE ISBN = '$ISBN'";
 
 Since the field is a varchar, should wrapping the variable
 (1--111-11) in single quotes work?
 
 If I change the data in the table to 'abcd' and run this query
 
$ISBN = 'abcd';
$query = "SELECT * FROM book_details WHERE ISBN = '$ISBN'";
 
 It works.
 
 PHP Version 4.3.2
 mysql-4.0.14b
 
 Thanks,
 James
 
 
 
 
 Try 'echo'ing $query to your browser and then copying the query from
 your browser into the mysql client and see if it works.
 I suspect there may be something wrong with the data after the $ISBN =
 $_GET['isbn'] bit, and echoing the query might make it more obvious what
 the problem is.

 -- 
 Daniel Kasak
 IT Developer
 * NUS Consulting Group*
 Level 18, 168 Walker Street
 North Sydney, NSW, Australia 2060
 T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
 email: [EMAIL PROTECTED]
 website: http://www.nusconsulting.com









Changing ENUM Values

2003-07-17 Thread Aaron Blew
So I've got a fairly big table (20,000 records) which has an ENUM
column.  I need to change the name of one of the ENUM options.  For
example, if the ENUM value was 'blue' before, it needs to be 'purple'
now.  What's the best way to do this?
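
One common approach (a sketch, not from the thread; the table and column names are hypothetical) is to widen the ENUM to allow both values, migrate the rows, then drop the old value:

```sql
-- 1. Temporarily allow both the old and new values.
ALTER TABLE items MODIFY colour ENUM('blue','purple','red') NOT NULL;
-- 2. Rewrite the existing rows.
UPDATE items SET colour = 'purple' WHERE colour = 'blue';
-- 3. Drop the old value from the definition.
ALTER TABLE items MODIFY colour ENUM('purple','red') NOT NULL;
```

Doing the UPDATE before removing 'blue' from the definition avoids rows silently falling back to the empty-string ENUM value.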

Thanks,
-Aaron


---
Aaron Blew
Jackson County School District #6 Network/Systems Analyst
[EMAIL PROTECTED]
(541)494-6900

You can destroy your now by worrying about tomorrow.
-- Janis Joplin





Can't get MySQL to log queries.

2003-06-06 Thread JACKSON,AARON (HP-Boise,ex1)
I need to be able to examine the queries being sent to MySQL.  After reading
the documentation I created /etc/my.cnf and added a line for the log option.
Unfortunately, I have been unable to get it to log the queries.  
 
Is there another option that must be set?
 
I have the following in /etc/my.cnf
 
[mysqld]
log=/var/log/mysql/mysql_query.log
[mysql.server]
log=/var/log/mysql/mysql_query.log
 
This is what /var/log/mysql/mysql_query.log looks like.  
 
/usr/sbin/mysqld, Version: 4.0.13-log, started with:
Tcp port: 3306  Unix socket: /var/lib/mysql/mysql.sock
Time                 Id Command    Argument
 
I am using MySQL 4.0.13 on Redhat 7.3.
Aaron
 
 


can you do this? mysql backend question

2003-06-03 Thread Aaron P. Martinez
i have redhat e. v2.1, mysql v. 3.23.54a-3.72, postfix 2.0.10, courier-1.7.3
and amavisd-new (newest version), and i'm successfully running mysql as the
backend for both.  My question is this:

all of the data for the above programs is stored in a bunch of different
tables: aliases_this, virtuals_that, vgid_another, etc.  What i want to
do is create one table; for originality's sake let's call it master.  In
master, i want to have all of the fields that are used in all of my lookup
tables for postfix, courier and amavis, as well as some 'non-lookup' customer
contact info.  I'm wondering if there is a way to make it so that my lookup
tables pull their data from the master table and update dynamically when
the master has been modified.

I'll admit now, i did look through the manual and did a search on the
website, but i was far from exhaustive in my search i'm sure.  I'm not
looking for someone to spell this out, rather just perhaps a pointer to a
particular portion of some document that might explain this.

Thanks in advance

Aaron Martinez





Choosing a column for primary key

2003-03-27 Thread Aaron Williams
I'd like to know about any potential issues of choosing a char(32) as
the column type of a primary key instead of BIGINT or some other numeric
type.  A little background.

I would like to generate a unique identifier to use as a primary key for
our application instead of using an AUTOINCREMENT column.  This is to
give us database independence.  I've seen several algorithms that will
generate a GUID based on timestamp, machine IP, etc and return a 32 byte
string that is guaranteed unique.  However, there are concerns that
joins using this key versus a large integer would cause performance
problems as the table grows.

Would joins of tables with character-based primary keys be slower than
those with numeric-based keys?  Has anyone had experience implementing a
character-based primary key in a table of non-trivial size (> 500,000
rows)?  Thanks for any assistance or pointers.
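
For concreteness, the two layouts under discussion look like this (a sketch; all names are hypothetical, not from the message):

```sql
-- Option A: numeric surrogate key (database-generated)
CREATE TABLE orders_int (
  id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload VARCHAR(255)
);

-- Option B: application-generated 32-character GUID key
CREATE TABLE orders_guid (
  id CHAR(32) NOT NULL PRIMARY KEY,   -- hex GUID built from timestamp, IP, etc.
  payload VARCHAR(255)
);
```

The trade-off being asked about is the cost of comparing and indexing 32-byte keys in joins versus 8-byte integers.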

Aaron
[EMAIL PROTECTED]
 





Different value from same function

2003-02-18 Thread Aaron Conaway
Weird problem here running 3.23.55-Max.  I'm (still) trying to get my IP
address DB working, but have run into a problem.

When I use the inet_aton function during an insert, I get a very
different value than when I use the function by itself on the CLI.  Please
see below.

The IP address I am trying to use is 172.20.20.2.  When I run select
inet_aton('172.20.20.2') on the CLI, I get 2886996994, which is the
correct value; when I run the same function during an insert, I get
2147483647, which is not correct.

Any thoughts on this one?  I'm stumped.

mysql> describe host;
+--------+-------------+------+-----+---------+-------+
| Field  | Type        | Null | Key | Default | Extra |
+--------+-------------+------+-----+---------+-------+
| name   | varchar(20) |      | PRI |         |       |
| ip     | int(4)      |      |     | 0       |       |
| admin  | varchar(30) |      |     |         |       |
| subnet | varchar(20) |      |     |         |       |
+--------+-------------+------+-----+---------+-------+
4 rows in set (0.01 sec)

mysql> insert into host values (
    -> 'Test1',
    -> inet_aton('172.20.20.2'),
    -> 'Pixies',
    -> 'Elive'
    -> );
Query OK, 1 row affected (0.00 sec)

mysql> select * from host;
+--------+------------+--------+--------+
| name   | ip         | admin  | subnet |
+--------+------------+--------+--------+
| Test1  | 2147483647 | admin1 | sub1   |
+--------+------------+--------+--------+
1 row in set (0.01 sec)

mysql> select inet_ntoa(2147483647);
+-----------------------+
| inet_ntoa(2147483647) |
+-----------------------+
| 127.255.255.255       |
+-----------------------+
1 row in set (0.00 sec)

mysql> select inet_aton('172.20.20.2');
+--------------------------+
| inet_aton('172.20.20.2') |
+--------------------------+
|               2886996994 |
+--------------------------+
1 row in set (0.00 sec)
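
(An observation not in the original message: 2147483647 is the maximum value of a signed INT, which suggests the inserted value is being clamped because the `ip int(4)` column is signed. A hedged sketch of a fix, assuming the table shown above:)

```sql
-- INET_ATON() can return values up to 4294967295, but a signed INT
-- tops out at 2147483647, so addresses above 127.255.255.255 get
-- clamped on insert.  Store the address in an unsigned column instead:
ALTER TABLE host MODIFY ip INT UNSIGNED NOT NULL DEFAULT 0;
```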



-- 
Aaron Conaway 
Network Engineer III 
Verisign, Inc. -- Telecom Services Division 
http://www.verisign.com 
Office:  912.527.4343 
Fax:  912.527.4014 

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Replication Problems with Adding New Records

2003-02-13 Thread Aaron Weed
I am using MySQL version 4.0.10 on both Slave and Master.  I am running a
Visual Basic front end connected to MySQL via MySQL ODBC v3.51.04.  When
using MySQLCC, adding records to the master database will work fine, the
server shows the additional record added.  When using our app in VB, a
record will be added on the Master, but the Slave does not pick up the added
record.  The log-bin file shows there is something recorded, but no error
message or no change! If anyone has any input, that would be great... If you
need or want further explaining please email, I will be glad to help...
Thanks...

 Aaron






IP Addresses -- How to Store

2003-02-11 Thread Aaron Conaway
I'm looking to develop a database of IP addresses for the company and
have, of course, chosen mySQL as the backend.  I want the app to add
(remove, etc.) a host, giving its hostname and segment.  The app will
add the next available address to the database and, looking at the
segment, provide the subnet mask and default gateway for said new host.
I've got the db structure written out, but I'm having issues on how to
store the data like address, subnet mask, default gateway.

Our network is very diverse, covering many ranges of addresses and, more
importantly, many subnet masks.  Some are 24bit, some are 16bit, and
some are 23bit.  What is the best way to store this data in mySQL?

If I do a varchar, then how do I restrict the data to between 0 and 255?
Also, how would I manipulate any address with a classless mask?  I'm
thinking storage as binary so I can parse out a byte of binary data and
present it as a decimal number; this also limits the data to numbers
between 0 and 255, based on an 8-bit byte.  The problem is that I have
no clue how to store such.

I'm running around in circles on this one.  Can some point me to a
resource that can shed some light on this type of data storage?
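
(A sketch, not from the message: the usual layout stores each value as an unsigned 32-bit integer, converting at the boundaries with INET_ATON()/INET_NTOA(); this also handles classless masks like 23-bit naturally, since a netmask is just another 32-bit value. All names are hypothetical.)

```sql
CREATE TABLE ip_host (
  hostname VARCHAR(64) NOT NULL PRIMARY KEY,
  address  INT UNSIGNED NOT NULL,  -- e.g. INET_ATON('10.1.2.3')
  netmask  INT UNSIGNED NOT NULL,  -- e.g. INET_ATON('255.255.254.0') for a 23-bit mask
  gateway  INT UNSIGNED NOT NULL
);

-- Display hosts in dotted-quad form:
SELECT hostname, INET_NTOA(address), INET_NTOA(netmask), INET_NTOA(gateway)
FROM ip_host;
```

Storing integers also makes the 0-255 range constraint and per-octet parsing problems disappear, since the dotted-quad form only exists at display time.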

-- 
Aaron Conaway 
Network Engineer III 
Verisign, Inc. -- Telecom Services Division 
http://www.verisign.com 
Office:  912.527.4343 
Fax:  912.527.4014 





RE: IP Addresses -- How to Store

2003-02-11 Thread Aaron Conaway
Thanks to Peter, Ravi, and Dan.  That's exactly what I needed to know.

--
Aaron Conaway
Network Engineer III
Verisign, Inc. -- Telecom Services Division
http://www.verisign.com
Office:  912.527.4343
Fax:  912.527.4014


-Original Message-
From: Peter Hicks [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, February 11, 2003 1:30 PM
To: Dan Nelson
Cc: Aaron Conaway; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: IP Addresses -- How to Store


On Tue, 11 Feb 2003, Dan Nelson wrote:

 Store your addresses as INTs, so you would have three fields: 
 address, netmask, and gateway.  You can either encode the values

 yourself, or use mysql's INET_NTOA()/INET_ATON() functions.

...and beware, INET_NTOA/ATON calls aren't compatible (as far as I can
see) with PHP's equivalent calls.  PHP uses signed integers, and MySQL
does not.

Has anyone come up with a workaround to this on either the MySQL or PHP
sides?


Peter.






Re: bug submitted (non-unique indicies in heap tables do not work correctly)

2003-01-09 Thread Aaron Krowne
Cute =)

(MySQL sql query queries smallint =)

Aaron Krowne

On Thu, Jan 09, 2003 at 04:36:12PM +0100, [EMAIL PROTECTED] wrote:
 
 Your message cannot be posted because it appears to be either spam or
 simply off topic to our filter. To bypass the filter you must include
 one of the following words in your message:
 
 sql,query,queries,smallint
 
 If you just reply to this message, and include the entire text of it in the
 reply, your reply will go through. However, you should
 first review the text of the message to make sure it has something to do
 with MySQL. Just typing the word MySQL once will be sufficient, for example.
 
 You have written the following:
 
 Hi,
 
 I just submitted a bug with the parenthesized title, but I don't think
 my from address was correct (akrowne instead of [EMAIL PROTECTED]), so I
 am posting this message to connect myself with the bug report.  Did this
 report make it in?  Thanks,
 
 Aaron Krowne
 





Re: Backups mechanism

2003-01-08 Thread Aaron Clausen
On Tue, 7 Jan 2003 [EMAIL PROTECTED] wrote:


  I use mysqldump to do a total backup of my database.  This is done once a
  day just after midnight.  The file is then taken up by our tape backup
  software.  I do hourly backups of our more important databases, also using
  mysqlbackup.  Works fine and I have used it for restorals on a number of
  occasions.  Even better, mysqlbackup generates SQL files, which, with a
  little manipulation, can be imported into other database systems.

  Doesn't mysqldump also create sql files or at least create the commands
 to rebuild what it is dumping?

Using default options, it creates a SQL script that will create the table
structures and reload the data.  You can use the mysql command to
repopulate the database.  I'm in constant development of an in-house
accounting system, and to test out bug fixes, new features, etc., I'll just
dump the running database and pipe it into a test database.  I found
mysqldump to have a bit of a learning curve, but I could not survive without
it now.

-- 
Aaron Clausen






Re: Backups mechanism

2003-01-07 Thread Aaron Clausen
On Tue, 7 Jan 2003, Jonas Widarsson wrote:

   Hello world!
 Every attempt I have made to find a decent way of backing up a database
 ends up with tons of reading to show hundreds of ways to do database
 backups.

 I want to know which way to do complete backups is most commonly used by
 professional users. (Windows and Linux)

 I have pleasant experience from Microsoft SQL Server 2000, where backing
 up is very convenient, easy to understand, and just a matter of a mouse click.

 Is there any similarly convenient way to do this with mysql, for example:
 * a shell script (windows / Linux) that works as expected without days
 of configuration?
 * scheduled backups, complete or differential?
 * GUI-solution ??? (mysql control center does not even mention the word
 backup)

 Backup is such an important issue. Why does it seem like it is something
 that the developers don't care about?

I use mysqldump to do a total backup of my database.  This is done once a
day just after midnight.  The file is then taken up by our tape backup
software.  I do hourly backups of our more important databases, also using
mysqlbackup.  Works fine and I have used it for restorals on a number of
occasions.  Even better, mysqlbackup generates SQL files, which, with a
little manipulation, can be imported into other database systems.

-- 
Aaron Clausen






filters

2003-01-02 Thread Aaron Scribner
Ok,

you guys are going to really hate me for this.  I just signed up for this 
list and I am asking st00pid questions already.

I am using Eudora and trying to filter these messages.  I set it up to 
monitor the To: field, however this is not doing anything.  I also 
noticed that there are no tags in the Subject lines of the messages sent to 
the list, that is what I usually use to filter messages.

Does anyone have any tips for filtering these messages using Eudora?  I 
just sub'd to 7 different MySQL lists and cannot get any of my filters to 
work right.  and sorry for the WOB.

- Aaron





Error 2013: Lost connection to MySQL server

2002-12-19 Thread Aaron J. Martin
mysql client connections work fine on localhost, but connections 
from other hosts on the local area network fail with the following error:

ERROR 2013: Lost connection to MySQL server during query

This is an existing installation.  I had mysql-3.23.33 working 
and was trying to upgrade to a more current version.  I tried
mysql-3.23.53 and a few days later mysql-3.23.54a (this try).

I was able to go back to mysql-3.23.33, which still works.  
I just did a make install in the old build directory that I
still had online.
I may try recompiling 3.23.33 to verify that it's not a 
problem related to compile-time changes.  I don't think it is,
but it's possible due to the time span between the working and
non-working builds.

I noticed that the configure line listed below doesn't match 
the actual configure line I used.  This may be normal, but 
it caught my eye.

I used:
   ./configure --prefix=/local --with-libwrap=/local \
--with-charset=usa7 --with-mysqld-user=mysql \
--localstatedir=/local/dbdata



Submitter-Id:  submitter ID
Originator:   Aaron Martin
Organization: Institute for Crustal Studies, UCSB
  ---
   Aaron J. Martin(805) 893-8415   voice and message
   Institute for Crustal Studies  (805) 893-8649   FAX
   Girvetz 1140E  (805) 448-4120   SCEC Cellular 
   UC Santa Barbara   [EMAIL PROTECTED] 
   Santa Barbara, CA 93106http://www.crustal.ucsb.edu/~aaron
  
   PBIC Lab1252 Arts  (805) 893-3758   voice
  ---

MySQL support: none
Synopsis:  Error 2013: Lost connection to MySQL server
Severity:  serious
Priority:  medium
Category:  mysql
Class: sw-bug
Release:   mysql-3.23.54 (Source distribution)

Environment:

System: SunOS fablio 5.8 Generic_108528-17 sun4u sparc SUNW,Ultra-5_10
Architecture: sun4

Some paths:  /bin/perl /usr/ccs/bin/make /net/quake/opt/bin/gmake /local/bin/gcc 
/local/apps/SUNWspro/bin/cc
GCC: Reading specs from /local/lib/gcc-lib/sparc-sun-solaris2.8/3.0.4/specs
Configured with: /local/apps/gcc-3.0.4/configure --prefix=/local
Thread model: posix
gcc version 3.0.4
Compilation info: CC='gcc'  CFLAGS='-Wimplicit -Wreturn-type -Wswitch -Wtrigraphs 
-Wcomment -W -Wchar-subscripts -Wformat -Wparentheses -Wsign-compare -Wwrite-strings 
-Wunused -mcpu=pentiumpro -O3 -fno-omit-frame-pointer'  CXX='gcc'  
CXXFLAGS='-Wimplicit -Wreturn-type -Wswitch -Wtrigraphs -Wcomment -W -Wchar-subscripts 
-Wformat -Wparentheses -Wsign-compare -Wwrite-strings -Woverloaded-virtual 
-Wsign-promo -Wreorder -Wctor-dtor-privacy -Wnon-virtual-dtor -felide-constructors 
-fno-exceptions -fno-rtti -mcpu=pentiumpro -O3 -fno-omit-frame-pointer'  LDFLAGS=''
LIBC: 
-rw-r--r--   1 root bin  1788196 Dec  4 15:03 /lib/libc.a
lrwxrwxrwx   1 root root  11 Oct 24  2000 /lib/libc.so - ./libc.so.1
-rwxr-xr-x   1 root bin  1157872 Dec  4 15:03 /lib/libc.so.1
-rw-r--r--   1 root bin  1788196 Dec  4 15:03 /usr/lib/libc.a
lrwxrwxrwx   1 root root  11 Oct 24  2000 /usr/lib/libc.so - ./libc.so.1
-rwxr-xr-x   1 root bin  1157872 Dec  4 15:03 /usr/lib/libc.so.1
Configure command: ./configure '--prefix=/usr/local/mysql' '--enable-assembler' 
'--with-extra-charsets=complex' '--enable-thread-safe-client' '--with-innodb' 
'--with-berkeley-db' 'CFLAGS=-Wimplicit -Wreturn-type -Wswitch -Wtrigraphs -Wcomment 
-W -Wchar-subscripts -Wformat -Wparentheses -Wsign-compare -Wwrite-strings -Wunused 
-mcpu=pentiumpro -O3 -fno-omit-frame-pointer' 'CXXFLAGS=-Wimplicit -Wreturn-type 
-Wswitch -Wtrigraphs -Wcomment -W -Wchar-subscripts -Wformat -Wparentheses 
-Wsign-compare -Wwrite-strings -Woverloaded-virtual -Wsign-promo -Wreorder 
-Wctor-dtor-privacy -Wnon-virtual-dtor -felide-constructors -fno-exceptions -fno-rtti 
-mcpu=pentiumpro -O3 -fno-omit-frame-pointer' 'CXX=gcc'







Re: Can MySQL handle 120 million records?

2002-12-18 Thread Aaron Clausen
On Wed, 18 Dec 2002, Gerald Jensen wrote:

 Joe is right ... we switched from another SQL server to MySQL in 1999, and
 have never looked back.

 MySQL has been rock solid for our applications, the MySQL development team
 is great to work with, and our customers like it.


That's been my experience as well.  We have an in-house billing system which
I originally wrote to work with MS-Access.  I converted the whole thing over
to MySQL via ODBC in June 2000, and it has worked flawlessly ever since.  We
run it under Win2k, though I'm seriously thinking of moving the database
server over to Linux in the next six months.  But MySQL has been rock solid.
I have lost no data, save through my own stupidity, at any point.  I would
recommend it without reservations.

-- 
Aaron Clausen





