Re: MySQL Platform Migration

2016-09-13 Thread Ryan Coleman
Right...

I don't like seeing people being shredded by a high strung personality because 
they simply don't understand things at your level, Reiny. 

Be nice to people and I won't say anything. Be an ass and I will. 

> On Sep 13, 2016, at 06:39, Reindl Harald  wrote:
> 
> 
> 
>> Am 13.09.2016 um 12:13 schrieb Johan De Meersman:
>> - Original Message -
>>> From: "Ryan Coleman" 
>>> Subject: Re: MySQL Platform Migration
>>> 
>>> Because they want to be belittled by european jackasses online.
>> 
>> The thing is, while he has a bit of a rough edge, his technical advice is 
>> always solid. What has your contribution been, outside of insulting him, and 
>> for some reason, an entire continent?
> 
> this guy is just mentally ill and the next steps are:
> 
> * starting off-list mails (already happened)
> * can't stand the response
> * adds a list-moderator to CC
> * sets up something in his MUA which sends all read messages back
>  to me (including old ones from other lists he reads again)
> * can't stand the response to that action
> * forwards those responses to my boss
> * after I have had enough of the game and reject his
>  envelope, asks for list-removal by pretending
>  that my server sends bounces to all his list mails while
>  in fact it only rejects private email
> 
> https://mail-archives.apache.org/mod_mbox/spamassassin-users/201608.mbox/%3cbb0c1584-f56a-4c1e-8b58-0d4b7d9a8...@cwis.biz%3E
> 
> 
> the reason for that personal hate (stated on a different list) is this 
> response from last year: https://marc.info/?l=mysql&m=144526386203911&w=4
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
> 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: MySQL Platform Migration

2016-09-12 Thread Ryan Coleman
Dear sir,

You do not realize that there aren’t always sys admins on these lists. Your 
proven track record of asshole first, kid gloves later drives people away.

Your fight to change mailing lists is one which only you seem to share.

Goodnight. 

> On Sep 12, 2016, at 7:27 AM, Reindl Harald  wrote:
> 
> first: get rid of "reply-all" on mailing-lists, you break others' "reply-list" 
> button with the second copy which usually arrives faster than the list one
> 
> Am 12.09.2016 um 13:35 schrieb Harrie Robins:
>> It is bad practice to do so in my eyes (and yes that is an *opinion*).
>> When you advise people, instruct them to take the proven route, as
>> described in the mysql documentation
> 
> proven route?
> 
> proven is that you can *test* how it works out by
> * hot rsync
> * shut down the old server
> * cold rsync
> * start the old server
> which is scriptable to minimize downtime
> 
> after that (while the old machine is still in production) you figure out what 
> needs to get adapted in the configuration
> 
> then you test your software, prove that everything works fine
> 
> in the meantime you can test as often as you want the hot-cold-rsync to 
> refresh the mysql databases from production - and *now you have proven* that 
> everything works
> 
> since you have proven that successfully you can write a final script which does 
> the sync (over ssh with certificates) a last time, restarts the old server's 
> network service with a prepared network address and on the new server takes 
> over the old IP address
> 
> and trust me: that way you minimize downtime, have a way to go that you have 
> proven *by yourself*, which works, is tested, and when done correctly nobody 
> notices that the machine and underlying operating system changed
> 
> after doing that dozens of times for thousands of mysql databases i know what 
> i am doing, and i call it bad practice to use methods which, depending on 
> database sizes, take hours - and God forbid something goes wrong with your dump
> 
>> Second, mastering mysqldump should be the gold standard for any DBA.
>> Telling someone that asks for guidance to simply copy around some files
>> is bad practice as you do not know the level of expertise involved.
> 
> the gold standard for any sysadmin is to know where his data files and 
> configuration files are stored and how they work together - independent of 
> the software type
> 
>> Regards,
>> Harrie
>> 
>> On 12 September 2016 at 11:03, Reindl Harald wrote:
>> 
>> 
>> 
>>Am 12.09.2016 um 10:48 schrieb Harrie Robins:
>> 
>>Ok let's drop this. Simply copying files to migrate a server is
>>not the
>>approach to take (in my humble opinion)
>> 
>> 
>>And why?
>> 
>>When you start with "Get the same release-version of mysql running
>>on the target platform" and dump/load, what's the point in playing
>>around with dump-files when you have binary datafiles which can be
>>used on the destination?
>> 
>>and no - i am not talking about theory
>> 
>>i migrated a hosting-infrastructure with dozens of servers from OSX
>>PPC to OSX Intel and later to Linux x86_64 with just rsync the
>>mysql-datadir
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
> 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: MySQL Platform Migration

2016-09-11 Thread Ryan Coleman
Because they want to be belittled by european jackasses online.

On Sep 10, 2016, at 11:56 PM, Reindl Harald  wrote:
> 
> 
> Am 11.09.2016 um 06:36 schrieb Suresh Rajagopal:
>> Is the mysql datafile compatible with different operation system ? I have 
>> not done this in the past.
> 
> why shouldn't it when the identical software is running?
> it's just a bunch of files used by mysql
> 
> as said, copy it to the new machine and you are done - why do people these 
> days not just try things out, look if it works, and ask questions when it 
> doesn't?
> 
>> - Original Message -
>> From: Reindl Harald 
>> To: mysql@lists.mysql.com
>> Sent: Saturday, September 10, 2016 7:19 PM
>> Subject: Re: MySQL Platform Migration
>> 
>> Am 11.09.2016 um 00:26 schrieb Suresh Rajagopal:
>>> Is there any documentation for migrating mysql 5.0 database from darwin to 
>>> Linux
>> 
>> what do you need to migrate?
>> 
>> shutdown mysqld, copy the datadir to the new machine, start mysqld with
>> adopted configuration on the new machine - done
>> 
>> P.S:
>> don't cross-post on mysql and mariadb list
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
> 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: a

2015-10-19 Thread Ryan Coleman
You’re the one in grade school that always reminded us the teacher might be 
coming back soon and we should behave, right?


> On Oct 18, 2015, at 4:40 PM, Reindl Harald  wrote:
> 
> what about stop that bullshit or at least purge the list after press 
> reply-all?
> 
> Am 18.10.2015 um 22:52 schrieb bluethu...@gmail.com:
>> e
>> 
>> Sent from my iPhone
>> 
>>> On Oct 18, 2015, at 4:12 PM, Daevid Vincent  wrote:
>>> 
>>> d
>>> 
>>>> -Original Message-
>>>> From: ryan.esca...@gmail.com [mailto:ryan.esca...@gmail.com] On Behalf
>>>> Of Ryan Escarez
>>>> Sent: Friday, October 16, 2015 2:47 AM
>>>> To: Ryan Coleman
>>>> Cc: Dennis Ruiz; mysql-le...@lists.mysql.com
>>>> Subject: Re: a
>>>> 
>>>> c
>>>> 
>>>> On Fri, Oct 16, 2015 at 3:01 PM, Ryan Coleman 
>>>> wrote:
>>>> 
>>>>> b
>>>>> 
>>>>>> On Oct 15, 2015, at 10:07 PM, Dennis Ruiz  wrote:
>>>>>> 
>>>>>> a
> 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: a

2015-10-16 Thread Ryan Escarez
c

On Fri, Oct 16, 2015 at 3:01 PM, Ryan Coleman  wrote:

> b
>
> > On Oct 15, 2015, at 10:07 PM, Dennis Ruiz  wrote:
> >
> > a
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/mysql
> >
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


-- 
Ryan Escarez
CEBB F1E4 1E39 EC48 F05D  6B72 9C11 DD88 5E39 E471
$ gpg --keyserver pgp.mit.edu --recv-keys 0x5E39E471
main(k){float i,j,r,x,y=-16;while(puts(""),y++<15)
for(x=0;x++<84;putchar(" .:-;!/>)|&IH%*#"[k&15]))
for(i=k=r=0;j=r*r-i*i-2+x/25,i=2*r*i+y/10,j*j+i*i<11&&k++<111;r=j);}


Re: a

2015-10-16 Thread Ryan Coleman
b

> On Oct 15, 2015, at 10:07 PM, Dennis Ruiz  wrote:
> 
> a
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
> 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Using Joins/Unions

2015-08-04 Thread Ryan Coleman
No, but there should be. If there's not, my task is useless. 

Secondly yes. Unique name on it too. 

--
Ryan Coleman
Publisher, d3photography.com
ryan.cole...@cwis.biz
m. 651.373.5015
o. 612.568.2749

> On Aug 4, 2015, at 17:33, Wm Mussatto  wrote:
> 
>> On Tue, August 4, 2015 11:19, Ryan Coleman wrote:
>> I have been a MySQL user and supporter for over a decade (since 2001) and
>> I am almost ashamed to admit that I haven’t the faintest idea on how to do
>> joins and unions.
>> 
>> I have a specific query I would love to run…
>> 
>> I have two tables, one with Unique data (“images”) and one with
>> corresponding paths but many replicated records (“files”).
>> 
>> I want to run a query that takes the results from /images/ and also
>> searches /images.file/ as a LIKE statement from /files.path/, sort by
>> /files.filesize/ in descending order returning just the first record
>> (largest file size).  There may be up to 750 records from /images/ and
>> thusly could be 3000+ from /files/.
>> 
>> How on earth do I do this?
>> 
>> —
>> Ryan
> First question, will there always be at least one record in the files
> table for every record in the images table?  That controls the kind of
> join you will use.  I don't think that a union is a player. Also, is there
> a unique record ID in each of the tables?
> --
> William R. Mussatto
> Systems Engineer
> http://www.csz.com
> 909-920-9154
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Using Joins/Unions

2015-08-04 Thread Ryan Coleman
I have been a MySQL user and supporter for over a decade (since 2001) and I am 
almost ashamed to admit that I haven’t the faintest idea on how to do joins and 
unions.

I have a specific query I would love to run…

I have two tables, one with Unique data (“images”) and one with corresponding 
paths but many replicated records (“files”).

I want to run a query that takes the results from /images/ and also searches 
/images.file/ as a LIKE statement from /files.path/, sort by /files.filesize/ 
in descending order returning just the first record (largest file size).  There 
may be up to 750 records from /images/ and thusly could be 3000+ from /files/.

How on earth do I do this?

—
Ryan
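
One way to express this as a single query, for what it is worth (a sketch only:
the table and column names come from the question, but the LIKE join condition
and the "largest matching file per image" reading are assumptions):

  SELECT i.*, f.path, f.filesize
  FROM images AS i
  JOIN files AS f
    ON f.path LIKE CONCAT('%', i.file, '%')        -- assumed relationship between the tables
  WHERE f.filesize = (SELECT MAX(f2.filesize)      -- keep only the largest matching file per image
                      FROM files AS f2
                      WHERE f2.path LIKE CONCAT('%', i.file, '%'))
  ORDER BY f.filesize DESC;

If two files tie for the largest size, the image row comes back twice; adding a
tiebreaker column to the subquery comparison avoids that.
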
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql_secure_installation

2011-12-29 Thread Ryan Dewhurst
Hi Shawn,

I would assume that MySQL is installed mostly on production servers
rather than in classroom environments.

Wouldn't it make more sense for MySQL to be secure by default rather
than insecure by default?

It would make more sense to me if there was a
'mysql_insecure_installation' script that did the opposite and the
steps done by 'mysql_secure_installation' were implemented by default.

I suspect many developers are not even aware of
mysql_secure_installation or the steps that it takes and the vast
majority do not run it or do those steps manually at all.
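
For anyone who has never looked at it, the script boils down to roughly the
following statements (a sketch based on the manual pages referenced in this
thread; exact statements vary by version, and 'new-password' is a placeholder):

  SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new-password');  -- set a root password
  DELETE FROM mysql.user WHERE User = '';                          -- drop anonymous accounts
  DELETE FROM mysql.user WHERE User = 'root'
      AND Host NOT IN ('localhost', '127.0.0.1', '::1');           -- no remote root login
  DROP DATABASE IF EXISTS test;                                    -- remove the test database
  DELETE FROM mysql.db WHERE Db = 'test' OR Db = 'test\\_%';       -- and its open privileges
  FLUSH PRIVILEGES;
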

In my opinion, additional steps shouldn't be needed to make MySQL more
secure; instead, additional steps should be taken to make it insecure
if that is what is needed in certain environments.

Thank you for the reply.

Ryan

On Tue, Dec 27, 2011 at 4:08 PM, Shawn Green (MySQL)
 wrote:
> Hello Ryan,
>
>
> On 12/18/2011 15:36, Ryan Dewhurst wrote:
>>
>> Hi,
>> Does anyone know why what's done in 'mysql_secure_installation' [0]
>> is not part of the default mysql installation?
>> [0] http://dev.mysql.com/doc/refman/5.0/en/mysql-secure-installation.html
>> Thank you, Ryan Dewhurst
>> P.S. I also asked this question on the
>> forums: http://forums.mysql.com/read.php?30,506069,506069#msg-506069
>>
>
> The script simply automates the steps documented in our manual, here:
> http://dev.mysql.com/doc/refman/5.0/en/default-privileges.html
>
> If you want to see the script in action:
> * repeat a fresh install
> * enable the General Query Log
> * run the script.
>
> The General Query Log stores a copy of every command sent to a MySQL server
> before the server even parses it. It's there as a diagnostic tool and should
> not be enabled on a production machine unless there is a specific need to do
> so.
>
> The steps of 'mysql_secure_installation' are not performed by default
> because many people want to just get to know MySQL before putting it into
> full production. This is most easily performed (especially in a classroom
> setting) with an unsecured installation. Also the steps to secure the
> installation can be leveraged as an excellent teaching tool for:
>
> a) How MySQL accounts are authenticated
> b) Where the account information is stored
> c) The different levels of authentication supported by MySQL.
>
> For those who don't want to read or learn, or for those who simply want to
> automate their installation, there is the script.
>
> --
> Shawn Green
> MySQL Principal Technical Support Engineer
> Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
> Office: Blountville, TN

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



mysql_secure_installation

2011-12-18 Thread Ryan Dewhurst
Hi,
Does anyone know why what's done in 'mysql_secure_installation' [0]
is not part of the default mysql installation?
[0] http://dev.mysql.com/doc/refman/5.0/en/mysql-secure-installation.html
Thank you, Ryan Dewhurst
P.S. I also asked this question on the
forums: http://forums.mysql.com/read.php?30,506069,506069#msg-506069

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: One database per server architecture

2011-10-18 Thread Mark, Ryan
Thanks guys!

I figured there would be no clearcut answer. I was curious if there were any 
nuggets of wisdom or rules of thumb I was overlooking.

We initially launched sharing db servers, but had performance trouble. We've 
since tracked down problems in our app server config (WordPress does not like 
to share an app server) and added memcache. So I think we're ripe for another 
try.

Thanks again,
Ryan

On Oct 18, 2011, at 5:39 PM, Lydia Rowe wrote:

> You have answered your own question, good sir. Or so I have come to believe. 
> Is your primary concern $? Is your organization focused on the bottom line? 
> Option B allows you to move in the cost-saving direction. I imagine some may 
> object to sharing resources between (potentially exploitable) WordPress 
> installations but hey, so goes the show, as they say around these parts here!
> 
> --
> Howdy,
> 
> Lydia
> 
> On Oct 18, 2011 5:19 PM, "Mark, Ryan"  wrote:
> Architecture question I'm having trouble finding an answer to:
> 
> I run four WordPress websites. I have mysql setup in a write master/read 
> replica slave configuration on Amazon. There is one master that all the 
> WordPress instances write to. I'm trying to figure out how to setup the read 
> replicas.
> 
> Should I:
> 
> A. Give each WordPress instance its own read replica?
> 
> or
> 
> B. Have all the WordPress instances use all the read replicas?
> 
> I have done A before, and it works fine. But by going with B I can 
> consolidate, use beefier hardware and save money.
> 
> Any thoughts?
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=ly...@lydiarowe.com
> 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



One database per server architecture

2011-10-18 Thread Mark, Ryan
Architecture question I'm having trouble finding an answer to:

I run four WordPress websites. I have mysql setup in a write master/read 
replica slave configuration on Amazon. There is one master that all the 
WordPress instances write to. I'm trying to figure out how to setup the read 
replicas.

Should I:

A. Give each WordPress instance its own read replica?

or

B. Have all the WordPress instances use all the read replicas?

I have done A before, and it works fine. But by going with B I can consolidate, 
use beefier hardware and save money.

Any thoughts?
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



how to generate a data set then join with it on the fly?

2011-01-10 Thread Ryan Liu
Hi,

In MySQL, is it possible to generate a data set and join with it on the fly
(without creating a temporary table)?

e.g. for a report used by a graphing tool, it requires data for all dates,
even if the value is null. Can I

select vacationT.* left join ( all dates d in the past 3 years) on 
vacationT.`date` = d  ?

Thanks,
Ryan
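
For what it is worth, a series of dates can be derived on the fly from
cross-joined digit sets and used as the left side of the join, so dates with no
data still come back as NULL rows. A sketch (it covers roughly the last 1000
days; add a fourth digit set for the full three years, and the vacationT.`date`
column is assumed from the pseudo-query above):

  SELECT d.day, v.*
  FROM (
      SELECT DATE_SUB(CURDATE(), INTERVAL u.n + 10 * t.n + 100 * h.n DAY) AS day
      FROM (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) AS u,
           (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) AS t,
           (SELECT 0 AS n UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
            UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) AS h
  ) AS d
  LEFT JOIN vacationT AS v ON v.`date` = d.day;
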


  

Parameter(s) need to take care when setting replication over WAN

2010-12-03 Thread Ryan Chan
Hello,

I am setting up MySQL (5.0) replication over a WAN for backup purposes (with
SSL enabled).

I have read the document:
http://dev.mysql.com/doc/refman/5.0/en/replication.html, and wonder if
any special tuning is needed for using replication over a WAN.

Are there any configuration parameters that need special handling? The ones I
understand so far are:

- --master-connect-retry=60
- --master-retry-count=86400

Any others?


Thanks.
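
Most of that is expressible on the slave with CHANGE MASTER TO rather than
command-line options. A sketch (host, user, password and certificate path are
placeholders, not values from this thread):

  CHANGE MASTER TO
      MASTER_HOST = 'master.example.com',
      MASTER_USER = 'repl',
      MASTER_PASSWORD = 'secret',
      MASTER_CONNECT_RETRY = 60,            -- same effect as --master-connect-retry=60
      MASTER_SSL = 1,
      MASTER_SSL_CA = '/etc/mysql/ssl/ca.pem';

  -- Two dynamic variables also worth a look on a WAN link:
  SET GLOBAL slave_net_timeout = 60;         -- notice a dropped connection sooner than the 3600s default
  SET GLOBAL slave_compressed_protocol = 1;  -- trade CPU for bandwidth on the replication stream
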

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Storage of UTF-8 char in MySQL

2010-08-14 Thread Ryan Chan
According to this document:
http://dev.mysql.com/doc/refman/5.0/en/charset-unicode.html

It says MySQL supports UTF-8 using one to three bytes per character.

But I have created a test table:

-- create table test ( c char(5) ) default charset = utf8;

From the table status, the data length is always a multiple of 16.

So how does it support 3-byte UTF-8 in practice?


Thanks
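
A quick way to see the variable-width storage in action with the test table
above (a sketch; it needs a utf8 connection, hence the SET NAMES):

  SET NAMES utf8;
  INSERT INTO test (c) VALUES ('abcde'), ('ααααα');   -- Greek alpha takes 2 bytes per character in utf8
  SELECT c, CHAR_LENGTH(c) AS chars, LENGTH(c) AS bytes FROM test;
  -- 'abcde' -> 5 chars / 5 bytes, 'ααααα' -> 5 chars / 10 bytes

The multiple-of-16 data length is most likely just the fixed MyISAM row size
here: a one-byte row header plus the 15 bytes a CHAR(5) utf8 column reserves
for its three-byte worst case, regardless of what is actually stored.
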

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



int(10) va int(11)

2010-06-14 Thread Ryan Chan
Assume the MySQL INT range (unsigned) is from 0 to 4294967295.

That is 10 digits in total.

Why do a lot of tutorials on the web tell you to declare,

e.g.

 CREATE TABLE t1  (f INT(11) UNSIGNED);
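
The number in parentheses is only a display width hint, not a storage size;
every INT takes 4 bytes. 11 is simply the default display width of a signed INT
(10 digits plus a possible minus sign), and tutorials tend to copy it even for
UNSIGNED columns. The width only becomes visible with ZEROFILL. A sketch
(width_demo is a made-up table):

  CREATE TABLE width_demo (a INT(11) UNSIGNED, b INT(10) UNSIGNED ZEROFILL);
  INSERT INTO width_demo VALUES (42, 42);
  SELECT a, b FROM width_demo;
  -- a: 42          (no padding without ZEROFILL)
  -- b: 0000000042  (left-padded to the 10-digit display width; still 4 bytes on disk)
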

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Why UTF8 need 24bit in MySQL?

2010-06-08 Thread Ryan Chan
Hi,


On Tue, Jun 8, 2010 at 12:44 AM, Warren Young  wrote:
>  The Unicode consortium has stated that Unicode will
> never require more than 21 bits per character[*], and 24 bits is the next
> even multiple of 8 up from that.

Maybe off topic, but just curious... If 3 bytes is enough for every
Unicode codepoint, then what is the use of 4-byte UTF-8?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Why UTF8 need 24bit in MySQL?

2010-06-07 Thread Ryan Chan
http://dev.mysql.com/doc/refman/5.0/en/charset-unicode.html

Since MySQL only supports the BMP, isn't 16 bits per character all that is actually needed?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Table level locking when inserting auto-increment PK to InnoDB

2009-12-23 Thread Ryan Chan
Hey.

A few years ago, InnoDB required a table-level lock when inserting into a
table with an auto-increment PK, and Heikki said there would be a fix.

Does this problem still exist now?
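
For reference (not from the original thread): the old behaviour held a special
table-level AUTO-INC lock only for the duration of the inserting statement, and
in 5.1 and later the InnoDB behaviour became configurable. A quick check:

  SHOW GLOBAL VARIABLES LIKE 'innodb_autoinc_lock_mode';
  -- 0 = traditional (statement-scoped AUTO-INC table lock)
  -- 1 = consecutive (the newer default; simple INSERTs use a lightweight mutex instead)
  -- 2 = interleaved (no AUTO-INC table lock at all; meant for row-based replication)
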

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Help saving MySQL

2009-12-12 Thread Ryan Chan
On Sun, Dec 13, 2009 at 11:48 AM, Michael Dykman  wrote:
> I hope the only reason this thread  is so quiet is because we are all
> busy notifying our friends.  There are a hell of a lot more users
> invested in MySQL than those who read this list.  Spread the word!
>

Let's stand up today to face the evil!

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Table size vs Memory requirements?

2009-11-22 Thread Ryan Chan
Hello,

It is commonly heard that if you have a large table (assume MyISAM in my
case), you need a lot of memory in order to keep the key/index in memory
for performance; otherwise, table scans on disk are slow.

But how do I estimate how much memory I need?

Consider a simple case: a MyISAM table is 10GB in size with a 2GB index.
How much memory do I need?


Thanks.
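
One rough way to put numbers on it (a sketch): add up the MyISAM index size,
compare it to key_buffer_size, and then watch the key-cache miss rate under
real load. The key buffer only caches index blocks; data blocks are left to the
OS file cache.

  SELECT SUM(index_length) / 1024 / 1024 AS myisam_index_mb
  FROM information_schema.TABLES
  WHERE engine = 'MyISAM';

  SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';
  SHOW GLOBAL STATUS LIKE 'Key_read%';
  -- miss rate = Key_reads / Key_read_requests; a few percent or less is usually fine
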

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Selecting a column with a regular expression applied to it?

2008-09-11 Thread Ryan Stille
Thank you for the link Darryle, but I don't think you read through my 
whole question.  I have already read through that page, and I see how to 
use regex as a condition for a select.  What I am interested in is 
applying a regex to a column as I select it. Selecting a varchar 
column with all the non-ASCII characters removed, for example.


-Ryan

Darryle Steplight wrote:

Hi Ryan,

MySql does have regular expressions. See Link
http://dev.mysql.com/doc/refman/5.0/en/regexp.html

On Thu, Sep 11, 2008 at 9:52 AM, Ryan Stille <[EMAIL PROTECTED]> wrote:
  

From looking at the MySQL 5 docs, it doesn't look like there is any way to
select a column with a regular expression applied to it?

I have a column that has ended up with some non ascii characters in it,
probably vertical tabs and things like that from MS Excel.  I need to sort
by this field but its not coming out right because some of the values have
these bad characters at the beginning.   I'd like to select that column with
a regex applied to it that strips out all the non-ascii chars, then sort by
that field.  Is this possible?

All the examples I saw are just using the regex in the where clause.

Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Selecting a column with a regular expression applied to it?

2008-09-11 Thread Ryan Stille
From looking at the MySQL 5 docs, it doesn't look like there is any way 
to select a column with a regular expression applied to it?


I have a column that has ended up with some non-ASCII characters in it, 
probably vertical tabs and things like that from MS Excel.  I need to 
sort by this field but it's not coming out right because some of the 
values have these bad characters at the beginning.   I'd like to select 
that column with a regex applied to it that strips out all the non-ASCII 
chars, then sort by that field.  Is this possible?


All the examples I saw are just using the regex in the where clause.

Thanks,
-Ryan
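
MySQL of this vintage has no regexp-replace function (REGEXP_REPLACE only
arrived in 8.0), but for a handful of known offenders a chain of REPLACE()
calls in the select/sort expression does the job. A sketch (mytable and mycol
are placeholders):

  SELECT REPLACE(REPLACE(REPLACE(mycol, CHAR(9), ''),   -- strip tabs
                         CHAR(11), ''),                 -- strip vertical tabs
                 CHAR(13), '') AS clean_col             -- strip carriage returns
  FROM mytable
  ORDER BY clean_col;
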


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: my.cnf optimization

2008-09-04 Thread Ryan Schwartz

On Sep 4, 2008, at 1:48 PM, Ranjeet Walunj wrote:


Hi ryan.

As pointed by Johnny, it is difficult to give optimization advise  
without exactly knowing the performance of your machine.


I'm assuming you are using the machine as Database Server and not  
running application (Web/other) on the same.

(And you are using InnoDB as engine)

I would suggest keeping innodb_buffer_pool_size pretty high (+20G)


This is a dedicated MySQL server - nothing else running on it at all,  
so all that RAM is up for grabs. Mysqld is running in 64 bits, and  
after bumping innodb_buffer_pool_size to 4G our performance concerns  
are completely gone - I'll ramp that up after doing a bit more  
research on InnoDB tuning.



Please read up here :
http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/

Also if possible get a copy of "High performance MySQL" and go  
through it as it covers many good techniques for high performance  
MySQL setup.


I'll have to crack open my copy - haven't read through it in a while,  
and quite honestly I had forgotten to make any adjustments on the InnoDB  
side of things, because on the old MySQL server I inherited the devs  
were mostly using MyISAM tables.


Some of the default InnoDB settings are horribly wrong from high  
performance point of view. Can you post your complete my.cnf on  
pastebin or somewhere ?


http://pastebin.com/m2ebec4f6 includes everything in my.cnf but  
comments and blank lines, SHOW STATUS\G, SHOW INNODB STATUS\G, AND  
SHOW VARIABLES\G


All your help is much appreciated - I just wonder whether someone hasn't  
already written a simple script to autogenerate my.cnf based on system  
properties like available RAM, etc.? Surely there are some general  
recommendations depending on those specifics, rather than  
"just copy my-huge.cnf and modify"...

--
Ryan Schwartz



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: my.cnf optimization

2008-09-04 Thread Ryan Schwartz

Here's all the buffer variables:

mysql> show variables like '%buffer%'\G
*** 1. row ***
Variable_name: bulk_insert_buffer_size
Value: 8388608
*** 2. row ***
Variable_name: innodb_buffer_pool_awe_mem_mb
Value: 0
*** 3. row ***
Variable_name: innodb_buffer_pool_size
Value: 8388608
*** 4. row ***
Variable_name: innodb_log_buffer_size
Value: 1048576
*** 5. row ***
Variable_name: join_buffer_size
Value: 131072
*** 6. row ***
Variable_name: key_buffer_size
Value: 402653184
*** 7. row ***
Variable_name: myisam_sort_buffer_size
Value: 67108864
*** 8. row ***
Variable_name: net_buffer_length
Value: 16384
*** 9. row ***
Variable_name: preload_buffer_size
Value: 32768
*** 10. row ***
Variable_name: read_buffer_size
Value: 67104768
*** 11. row ***
Variable_name: read_rnd_buffer_size
Value: 67104768
*** 12. row ***
Variable_name: sort_buffer_size
Value: 67108856
12 rows in set (0.00 sec)

I'll bump innodb_buffer_pool_size to 2G and see how that goes. Thanks  
for the tips; if there are additional InnoDB tuning parameters folks  
tend to hit first, I'd be glad to try them as well.
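
One rough way to tell whether the larger pool is big enough once it has warmed
up (a sketch):

  SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
  -- hit rate = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests);
  -- Innodb_buffer_pool_reads counts the requests that had to go to disk
  SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';
  -- if this stays at 0 and the disk reads keep climbing, the pool is still too small
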

--
Ryan Schwartz


On Sep 4, 2008, at 8:16 AM, Johnny Withers wrote:

If you do have a fair about of innodb tables you can increase  
performance by increasing the size of innodb_buffer_pool_size.  
According to your status output, you are currently using the entire  
buffer pool:


*** 137. row ***
Variable_name: Innodb_buffer_pool_pages_free
  Value: 0

It seems to be set small anyway:

Variable_name: Innodb_buffer_pool_pages_data
  Value: 501

It also seems that you do have alot of innodb data:

*** 151. row ***
Variable_name: Innodb_data_read
  Value: 27743085907968


Again, I don't know what you have innodb_buffer_pool_size set to,  
but you have plenty of RAM; I'd set it to about 4.5GB and see if  
that helps. I also don't know much about OS X and your hardware... is  
it 64-bit? If it is not 64-bit, you probably can't use 4.5GB as the  
size of your buffer pool.


-johnny

On 9/3/08, Ryan Schwartz <[EMAIL PROTECTED]> wrote: All,

We're seeing a huge surge in our qps and I'd like to make sure we're  
tuned as well as we can be. I'm wondering if I've got some variables  
maybe set too large (is that even possible?) ? We do have a fair bit  
of innodb, so perhaps I should add some non-defaults there, but I'm  
not so sure where to start with that.


Hardware is an Apple Xserve, 2x Quad-Core Intel @ 3Ghz, 32GB RAM, 3x  
280 GB SAS drives in Raid-5 config, OS is Mac OS X 10.5.4 and here's  
my my.cnf:


[billie:~] admin$ egrep -v '^$|^#' /etc/my.cnf
[client]
port= 3306
socket  = /var/mysql/mysql.sock
[mysqld]
port= 3306
socket  = /var/mysql/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 50M
table_cache = 2048
sort_buffer_size = 64M
read_buffer_size = 64M
read_rnd_buffer_size = 64M
myisam_sort_buffer_size = 64M
thread_cache_size = 100
query_cache_size = 64M
thread_concurrency = 16
skip-thread-priority
max_connections = 750
old-passwords
innodb_file_per_table
innodb_flush_log_at_trx_commit=1
sync_binlog=1
log-slow-queries
long_query_time=2
log_queries_not_using_indexes
log-bin=mysql-bin
server-id   = 4
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout

SHOW STATUS\G output follows my sig below...

My devs are adding indexes where the slow query log is pointing  
them, but any suggestions on how better to tune things up would be  
much appreciated. I'm not sure what else to tune here but we're  
getting bursts of 1200+ queries per second regularly and seeing  
things slow down significantly.


Best,
--
Ryan Schwartz

mysql> SHOW STATUS\G
*** 1. row ***
Variable_name: Aborted_clients
  Value: 1656
*** 2. row ***
Variable_name: Aborted_connects
  Value: 3
*** 3

my.cnf optimization

2008-09-03 Thread Ryan Schwartz

All,

We're seeing a huge surge in our qps and I'd like to make sure we're  
tuned as well as we can be. I'm wondering if I've got some variables  
maybe set too large (is that even possible?) ? We do have a fair bit  
of innodb, so perhaps I should add some non-defaults there, but I'm  
not so sure where to start with that.


Hardware is an Apple Xserve, 2x Quad-Core Intel @ 3Ghz, 32GB RAM, 3x  
280 GB SAS drives in Raid-5 config, OS is Mac OS X 10.5.4 and here's  
my my.cnf:


[billie:~] admin$ egrep -v '^$|^#' /etc/my.cnf
[client]
port= 3306
socket  = /var/mysql/mysql.sock
[mysqld]
port= 3306
socket  = /var/mysql/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 50M
table_cache = 2048
sort_buffer_size = 64M
read_buffer_size = 64M
read_rnd_buffer_size = 64M
myisam_sort_buffer_size = 64M
thread_cache_size = 100
query_cache_size = 64M
thread_concurrency = 16
skip-thread-priority
max_connections = 750
old-passwords
innodb_file_per_table
innodb_flush_log_at_trx_commit=1
sync_binlog=1
log-slow-queries
long_query_time=2
log_queries_not_using_indexes
log-bin=mysql-bin
server-id   = 4
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout

SHOW STATUS\G output follows my sig below...

My devs are adding indexes where the slow query log is pointing them,  
but any suggestions on how better to tune things up would be much  
appreciated. I'm not sure what else to tune here but we're getting  
bursts of 1200+ queries per second regularly and seeing things slow  
down significantly.


Best,
--
Ryan Schwartz

mysql> SHOW STATUS\G
*** 1. row ***
Variable_name: Aborted_clients
   Value: 1656
*** 2. row ***
Variable_name: Aborted_connects
   Value: 3
*** 3. row ***
Variable_name: Binlog_cache_disk_use
   Value: 276
*** 4. row ***
Variable_name: Binlog_cache_use
   Value: 6416113
*** 5. row ***
Variable_name: Bytes_received
   Value: 134
*** 6. row ***
Variable_name: Bytes_sent
   Value: 70104
*** 7. row ***
Variable_name: Com_admin_commands
   Value: 0
*** 8. row ***
Variable_name: Com_alter_db
   Value: 0
*** 9. row ***
Variable_name: Com_alter_table
   Value: 0
*** 10. row ***
Variable_name: Com_analyze
   Value: 0
*** 11. row ***
Variable_name: Com_backup_table
   Value: 0
*** 12. row ***
Variable_name: Com_begin
   Value: 0
*** 13. row ***
Variable_name: Com_call_procedure
   Value: 0
*** 14. row ***
Variable_name: Com_change_db
   Value: 0
*** 15. row ***
Variable_name: Com_change_master
   Value: 0
*** 16. row ***
Variable_name: Com_check
   Value: 0
*** 17. row ***
Variable_name: Com_checksum
   Value: 0
*** 18. row ***
Variable_name: Com_commit
   Value: 0
*** 19. row ***
Variable_name: Com_create_db
   Value: 0
*** 20. row ***
Variable_name: Com_create_function
   Value: 0
*** 21. row ***
Variable_name: Com_create_index
   Value: 0
*** 22. row ***
Variable_name: Com_create_table
   Value: 0
*** 23. row ***
Variable_name: Com_create_user
   Value: 0
*** 24. row ***
Variable_name: Com_dealloc_sql
   Value: 0
*** 25. row ***
Variable_name: Com_delete
   Value: 0
*** 26. row ***
Variable_name: Com_delete_multi
   Value: 0
*** 27. row ***
Variable_name: Com_do
   Value: 0
*** 28. row ***
Variable_name: Com_drop_db
   Value: 0
***

Re: Why is this delete so slow? ( 90 seconds per 100 records)

2008-04-11 Thread Ryan Stille

Ryan Stille wrote:
Thanks for the help, I rewrote it as a subselect and it deleted all 
10K records in two seconds.


DELETE subTable FROM subTable LEFT OUTER JOIN parentTable ON
subTable.parentID = parentTable.ID WHERE parentTable.ID IS NULL

-Ryan




Whoops, I meant that I rewrote it as a JOIN.  It originally was a subselect.

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why is this delete so slow? ( 90 seconds per 100 records)

2008-04-11 Thread Ryan Stille
Thanks for the help, I rewrote it as a subselect and it deleted all 10K 
records in two seconds.


DELETE subTable FROM subTable LEFT OUTER JOIN parentTable ON
subTable.parentID = parentTable.ID WHERE parentTable.ID IS NULL

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why is this delete so slow? ( 90 seconds per 100 records)

2008-04-11 Thread Ryan Stille
Also the table seems to be locked while running this delete... that's not 
going to be very good when I need to run it on production.   Is there a 
way to have it not be locked during this delete?


I'm thinking of creating a script to delete in 10 row increments until 
they are all gone.


Thinking about it now... it's probably the IN clause, isn't it?  I've 
heard those are slow.  Hopefully someone will have a better idea.


Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Why is this delete so slow? ( 90 seconds per 100 records)

2008-04-11 Thread Ryan Stille
I'm trying to delete some orphaned records from a table that has about 
150K records.  Here is my delete:


CREATE TEMPORARY TABLE deleteids AS (
 SELECT subTable.ID from subTable
 LEFT OUTER JOIN parentTable ON subTable.ID = parentTable.ID
 WHERE  parentTable.ID IS NULL
);
DELETE FROM subTable WHERE ID IN (SELECT ID FROM deleteids);
DROP TABLE deleteids;


The DELETE statement itself is what's taking so long.   About 90 seconds 
per 100 records.  Should it be so slow? That's almost 1 second per 
record!  There are 10K abandoned records I need to delete.  Here is the 
table definition:


CREATE TABLE `subTable` (
 `ID` int(11) unsigned NOT NULL auto_increment,
 `DonorID` int(10) unsigned NOT NULL default '0',
 `MedConID` int(11) unsigned NOT NULL default '0',
 `MedConSubID` int(11) unsigned NOT NULL default '0',
 `FamilyID` int(11) unsigned NOT NULL default '0',
 `cbResult` tinyint(1) unsigned NOT NULL default '0',
 `deleted` tinyint(1) unsigned NOT NULL default '0',
 PRIMARY KEY  (`ID`),
 KEY `MedConID` (`MedConID`),
 KEY `MedConSubID` (`MedConSubID`),
 KEY `FamilyID` (`FamilyID`),
 KEY `DonorID` (`DonorID`),
 KEY `deleted` (`deleted`)
) ENGINE=MyISAM AUTO_INCREMENT=292088 DEFAULT CHARSET=latin1;



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Name-based virtual servers?

2007-12-20 Thread ryan
Okay, I've convinced myself that this can't be done.  In the http world,
it can be done only because http/1.1 includes the hostname in the request.
If mysql doesn't do that, there's no way to handle all the cases.

So I'll either have to settle for people remembering their own socket
files, or ...

I would settle for a single shared mysql instance in which users could create
databases and drop their own databases but not other users'.  They'll have
to live with namespace collisions.

Is that possible to do with mysql's permissions?

Thanks.
--Ryan
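
For what it is worth, the usual way to get that with plain MySQL permissions is
a per-user database-name pattern. A sketch ('alice' and the password are
placeholders): the escaped-underscore wildcard lets each user create and drop
any database named <user>_something and nothing else, which also sidesteps most
name collisions.

  GRANT ALL PRIVILEGES ON `alice\_%`.* TO 'alice'@'%' IDENTIFIED BY 'some-password';
  -- alice can now CREATE DATABASE alice_anything and DROP it, but cannot touch bob_* databases
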

On Thu, 20 Dec 2007 [EMAIL PROTECTED] wrote:

> Hi.  I'd like to set up something like apache's name-based virtual
> hosting.
>
> I read the docs for mysqlmanager.  It told me how to set it up to run
> multiple instances of mysql on one machine, where each instance had its
> own port number and socket file.
>
> I'd like to have my users connect to their own servers instead, and not
> have to remember a port number.  Like:
>   mysql -h mysql.username.domain.com -u username -p
>
> and then they'd be pointed at their appropriate instance.
>
> Anybody know how I can do that?
>
> Thanks.
> --Ryan
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Name-based virtual servers?

2007-12-20 Thread ryan
Hi.  I'd like to set up something like apache's name-based virtual
hosting.

I read the docs for mysqlmanager.  It told me how to set it up to run
multiple instances of mysql on one machine, where each instance had its
own port number and socket file.

I'd like to have my users connect to their own servers instead, and not
have to remember a port number.  Like:
mysql -h mysql.username.domain.com -u username -p

and then they'd be pointed at their appropriate instance.

Anybody know how I can do that?

Thanks.
--Ryan

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problem with master/slave replication

2007-11-16 Thread Ryan Klein
I am having an issue whose cause I cannot determine. We have a master server 
that is actually a production server and a slave server that is a fallback, but 
after around 10 days it stops copying data and the servers fall out of sync. 
Here is my my.cnf file for the master server:


key_buffer  = 16M
max_allowed_packet  = 16M
thread_stack= 128K
thread_cache_size   = 8
#max_connections= 100
#table_cache= 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit   = 1M
query_cache_size= 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
server-id   = 1
log_bin = /var/log/mysql/mysql-bin
# WARNING: Using expire_logs_days without bin_log crashes the server! See 
README.Debian!

#expire_logs_days   = 10
max_binlog_size = 1000M
binlog_do_db= mydns
sync_binlog = 1
#binlog_ignore_db   = include_database_name
#

and for the slave server:

#
# * Fine Tuning
#
key_buffer  = 16M
max_allowed_packet  = 16M
thread_stack= 128K
thread_cache_size   = 8
#max_connections= 100
#table_cache= 64
#thread_concurrency = 10
#
# * Query Cache Configuration
#
query_cache_limit   = 1M
query_cache_size= 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
#log_slow_queries   = /var/log/mysql/mysql-slow.log
#long_query_time = 2
#log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
server-id   = 2
log_bin = /var/log/mysql/mysql-bin.log
# WARNING: Using expire_logs_days without bin_log crashes the server! See 
README.Debian!

#expire_logs_days   = 10
max_binlog_size = 1000M
#binlog_do_db   = include_database_name
#binlog_ignore_db   = include_database_name
#master-host= 74.202.241.24
#master-user= slavedb
#master-password= )vKe8m)?Eod>
#master-connect-retry   = 60
#replicate-do-db= mydns
#
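
When it falls out of sync, the first thing worth capturing on the slave is the
replication status (a sketch of the checks, not tied to this particular config):

  SHOW SLAVE STATUS\G
  -- Slave_IO_Running / Slave_SQL_Running should both say Yes;
  -- Last_Errno and Last_Error explain a stopped SQL thread (e.g. a duplicate key);
  -- Seconds_Behind_Master distinguishes plain lag from a dead thread.

  SHOW GLOBAL VARIABLES LIKE 'slave_net_timeout';
  -- a silently dropped connection can stall the IO thread until this many seconds pass
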

--
Regards,

Ryan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Slow Subquery

2007-10-19 Thread Ryan Bates
I'm trying to determine why a subquery is slower than running two  
separate queries. I have a simple many-to-many association using 3  
tables: projects, tags and projects_tags. Here's the query I'm using  
to find the projects with a given tag:


SELECT * FROM projects WHERE id IN (SELECT projects_tags.project_id  
FROM tags, projects_tags WHERE tags.name='foo' AND  
projects_tags.project_id=projects.id);

(0.36 sec)

Compare that with splitting it into two queries:

SELECT projects_tags.project_id FROM tags, projects_tags WHERE  
tags.name='foo' AND projects_tags.project_id=projects.id

(0.00 sec) /* returns 1, 2, 3 */

SELECT * FROM projects WHERE id IN (1, 2, 3);
(0.00 sec)

Why is it so much faster? Looking at the explain statement (below) of  
the one with the subquery, it appears it's not using the primary key  
index on the projects table. Why is it that MySQL doesn't perform  
this simple optimization? And is there a solution that will allow me  
to still use a subquery?


I realize I can use a join instead of a subquery, but this is a  
simplified example.



Here's the explain statement:

*** 1. row ***
   id: 1
  select_type: PRIMARY
table: projects
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 15433
Extra: Using where
*** 2. row ***
   id: 2
  select_type: DEPENDENT SUBQUERY
table: tags
 type: ref
possible_keys: PRIMARY,index_tags_on_name
  key: index_tags_on_name
  key_len: 258
  ref: const
 rows: 1
Extra: Using where; Using index
*** 3. row ***
   id: 2
  select_type: DEPENDENT SUBQUERY
table: projects_tags
 type: ref
possible_keys: tag_id
  key: tag_id
  key_len: 5
  ref: my_database.tags.id
 rows: 10
Extra: Using where


Here's the table dumps:

CREATE TABLE `projects` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  PRIMARY KEY  (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `tags` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  PRIMARY KEY  (`id`),
  KEY `index_tags_on_name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE `projects_tags` (
  `project_id` int(11) default NULL,
  `tag_id` int(11) default NULL,
  KEY `tag_id` (`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;


I'm using MySQL 5.0.37. Thanks in advance.

Ryan
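
One workaround that keeps a subquery but avoids the DEPENDENT SUBQUERY plan is
to materialise the matching ids once in a derived table and join to it (a
sketch against the tables above; the derived table is evaluated a single time):

  SELECT p.*
  FROM projects AS p
  JOIN (SELECT DISTINCT pt.project_id
        FROM tags AS t
        JOIN projects_tags AS pt ON pt.tag_id = t.id
        WHERE t.name = 'foo') AS matched
    ON matched.project_id = p.id;
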

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Need script to populate data.

2007-05-25 Thread Ryan Stille
This should do it. Save it in a file then run it like "perl newfile.pl". 
Save the output into a text file, then run it against your database.


# Emits one INSERT statement per row; adjust $records and the table name to taste.
$records = 100;
for ($i = 1; $i <= $records; $i++) {
    print "INSERT INTO MyTable (forumid, styleid, title, title_clean, " .
          "description, description_clean, options) VALUES\n";
    print "($i, 888, 'title $i', 'clean title $i', 'description $i', " .
          "'clean description $i', 999);\n";
}

so:
perl newfile.pl > test_records.sql
mysql -p mydatabase < test_records.sql

-Ryan
http://www.stillnetstudios.com

SRM SRM wrote:



Group,

I'm an Oracle DBA and new to mysql. I've just received responsibility 
to work on a mysql project. Basically I have 1 table:


forumid smallint(5) unsigned NO (null) 0 (null)
styleid smallint(5) unsigned NO (null) 0 (null)
title varchar(100) NO (null) (null) (null)
title_clean varchar(100) NO (null) (null) (null)
description text YES (null) (null)
description_clean text YES (null) (null)
options int(10) unsigned NO (null) 0 (null)


I need a script that will insert, say 1, records into this table. 
Note, I don't really care about the content of the data (i.e., the title 
could be TITLE1, TITLE2, TITLE3...TITLE1, whatever). I'm just 
interested in populating the table with X amount of rows and being 
able to perform some stress tests.


TIA









--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Ryan Stille

I use a script like this:

  #!/bin/bash
  DATE=`date +%A`

  DESTFILE="/home/mysql-backups/mysql-dump-$DATE"

  /usr/bin/mysqldump --skip-extended-insert -uroot -p"password" 
mydatabase > $DESTFILE.sql

  /usr/bin/zip -qjTm $DESTFILE.zip $DESTFILE.sql

I end up with:
mysql-dump-Friday.zip
mysql-dump-Saturday.zip
etc.

Modify to suit your needs.

-Ryan

Brian Dunning wrote:
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I 
need to dump it in CSV format and zip the file.


This is not on my server, and it's in production, so I don't want to 
risk testing different methods and possibly hanging up their server 
for a period of time, so I wanted to seek advice here first to find 
what's the best way to proceed.


I can easily use PHP to query the table for the results I want and 
write a file line by line and then zip it, but I'm worried that might 
take too long and hang up the machine. The other way to go is some 
kind of sql dump command, which I guess would be faster, but not sure 
how much control I'd have over the exact format of the file. Any 
suggestions which way I should proceed? Not hanging up their server is 
my prime concern.
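
Since the requirement is CSV specifically, another option is to have the server
write the file directly and only zip it afterwards (a sketch; the path, column
list and table name are placeholders, the file lands on the database server's
own filesystem, and the target file must not already exist):

  SELECT col1, col2, col3
    INTO OUTFILE '/tmp/weekly_dump.csv'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
  FROM huge_table;
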


--MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: import rss feed into mysql

2007-04-13 Thread Ryan Stille

Perl would be my language of choice to do something like this.

-Ryan

Thufir wrote:

what would be the quickest, easiest way to import, for example, the rss feed
<http://rss.gmane.org/gmane.comp.db.mysql.general> into mysql?

what would be some different approaches, pls?  I'm more interested in craigslist
stuff, but the above is a good example, I think.


thanks,

Thufir


  




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Daylight Savings Time Patch - easy check

2007-02-28 Thread Ryan Stille

Is MySQL on each one set to the same time zone?

SELECT @@global.time_zone;

-Ryan

Néstor wrote:

That was interesting.  I have 2 RHEL 3 servers and they both have been
updated to handle the DST.  They both yield different results when I 
ran the

command:
SERVER=RALPH
+---+---+ 


| UNIX_TIMESTAMP('2007-03-11 02:00:00') | UNIX_TIMESTAMP('2007-03-11
03:00:00') |
+---+---+ 


|  1173607200 | 1173610800 |
+---+--- 




and this for the other server :
SERVER=MAGGIE
+---+---+ 


| UNIX_TIMESTAMP('2007-03-11 02:00:00') | UNIX_TIMESTAMP('2007-03-11
03:00:00') |
+---+---+ 


|1173607200 |   1173607200 |
+---+--- 




Uhm.


Nestor

On 2/28/07, William R. Mussatto <[EMAIL PROTECTED]> wrote:


On Wed, February 28, 2007 14:10, Ryan Stille said:
> I am on 4.1.20-1.
>
> Maybe your OS isn't patched?
>
> Try this:  "SELECT @@global.time_zone;"
Won't help if you are on debian which is still on 4.0.
> If you get back "SYSTEM", then MySQL is looking to the OS for timezone
> data.  And its only loaded when MySQL starts, so if you haven't
> restarted MySQL since you patched your OS, you need to do that.
>
> -Ryan
>
> Jerry Schwartz wrote:
>> What version of MySQL are you using? I'm running 4.1.21, and that 
check

>> doesn't work even after I've updated (I think) the time zone tables.
>>
>> I should probably eyeball the output of mysql_tzinfo_to_sql.
>>
>> Regards,
>>
>> Jerry Schwartz
>> Global Information Incorporated
>> 195 Farmington Ave.
>> Farmington, CT 06032
>>
>> 860.674.8796 / FAX: 860.674.8341
>>
>>
>>
>>> -Original Message-
>>> From: Ryan Stille [mailto:[EMAIL PROTECTED]
>>> Sent: Saturday, February 24, 2007 4:28 PM
>>> To: mysql@lists.mysql.com
>>> Subject: Re: MySQL Daylight Savings Time Patch - easy check
>>>
>>> Ryan Stille wrote:
>>>
>>>> Paul DuBois wrote:
>>>>
>>>>> At 4:40 PM -0600 2/20/07, Ryan Stille wrote:
>>>>>
>>>>>> Is there an easy way to test to see if MySQL already has
>>>>>>
>>> the proper
>>>
>>>>>> tables loaded?
>>>>>>
>>>>>> -Ryan
>>>>>>
>>>>> Yes, reload them. :-)  After that, they're current! ...
>>>>>
>>>>>
>>> After digging around on the net for a while I found an easy
>>> way to tell
>>> if your MySQL installation is ready for the new daylight savings 
time.

>>>
>>> SELECT UNIX_TIMESTAMP('2007-03-11 02:00:00'),
>>> UNIX_TIMESTAMP('2007-03-11 03:00:00');
>>>
>>> This should return the same value, even though you are feeding it
>>> different times, because this is when the 1 hr change occurs.
>>>  I get the
>>> correct result on both of my machines.  On one of them I've run the
>>> suggested |mysql_tzinfo_to_sql command, on the other, the time zone
>>> tables are completely empty!
>>>
>>> Any wisdom on these time zone tables - are they ever used, should I
>>> populate them or not?
>>>
>>> -Ryan
>>>
>>> |
>>>
>>>
>>> --
>>> MySQL General Mailing List
>>> For list archives: http://lists.mysql.com/mysql
>>> To unsubscribe:
>>> http://lists.mysql.com/[EMAIL PROTECTED]
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Daylight Savings Time Patch - easy check

2007-02-28 Thread Ryan Stille

I am on 4.1.20-1.

Maybe your OS isn't patched?

Try this:  "SELECT @@global.time_zone;"

If you get back "SYSTEM", then MySQL is looking to the OS for timezone 
data.  And it's only loaded when MySQL starts, so if you haven't 
restarted MySQL since you patched your OS, you need to do that.


-Ryan

Jerry Schwartz wrote:

What version of MySQL are you using? I'm running 4.1.21, and that check
doesn't work even after I've updated (I think) the time zone tables.

I should probably eyeball the output of mysql_tzinfo_to_sql.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341


  

-Original Message-
From: Ryan Stille [mailto:[EMAIL PROTECTED]
Sent: Saturday, February 24, 2007 4:28 PM
To: mysql@lists.mysql.com
Subject: Re: MySQL Daylight Savings Time Patch - easy check

Ryan Stille wrote:


Paul DuBois wrote:
  

At 4:40 PM -0600 2/20/07, Ryan Stille wrote:


Is there an easy way to test to see if MySQL already has
  

the proper


tables loaded?

-Ryan
  

Yes, reload them. :-)  After that, they're current! ...



After digging around on the net for a while I found an easy
way to tell
if your MySQL installation is ready for the new daylight savings time.

SELECT UNIX_TIMESTAMP('2007-03-11 02:00:00'),
UNIX_TIMESTAMP('2007-03-11 03:00:00');

This should return the same value, even though you are feeding it
different times, because this is when the 1 hr change occurs.
 I get the
correct result on both of my machines.  On one of them I've run the
suggested mysql_tzinfo_to_sql command, on the other, the time zone
tables are completely empty!

Any wisdom on these time zone tables - are they ever used, should I
populate them or not?

-Ryan



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]








  




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: backup stratergy

2007-02-26 Thread Ryan Stille
What's wrong with using the --single-transaction switch for backing up 
InnoDB tables?  What does the Hot Backup product do that this doesn't?


Thanks,
-Ryan


Juan Eduardo Moreno wrote:

Ananda,

For Innodb the best is Innodb Hot Backup ( www.innodb.com (US$) )

For MyISAM you can use a simple backup ( copy/paste) of your files. Also,
you can do snapshots using mysqldump.

Also, you can use Zmanda ( www.zmanda.com).

Regards,
Juan Eduardo


On 2/23/07, Ananda Kumar <[EMAIL PROTECTED]> wrote:


Hi All,
Can you please direct me to any good documentation for a good backup and
recovery stratergy for MyISAM and INNODB in mysql.

regards
anandkl



 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Daylight Savings Time Patch - easy check

2007-02-24 Thread Ryan Stille
I'm not exactly sure what you should do, hence my earlier question about 
the empty time zone tables.  I get the same (correct) result on both of 
my servers - on one of them I've updated the MySQL time zone tables, and 
on the other the tables are empty, always have been.  Hopefully someone 
will weigh in on this.


Is your OS updated?  On Linux you can check by running 'zdump -v 
/etc/localtime | grep 2007'.  You should see some lines mentioning March 11.


If your OS is not ready for the change, that could be your problem.

You could also try updating the MySQL timezone tables, as was mentioned 
earlier in this thread.  I did mine with this command:

mysql_tzinfo_to_sql  /usr/share/zoneinfo | mysql mysql -p

And yes I got a few errors, as other people have mentioned in this 
thread.  But they all appear to be related to overseas timezones so I'm 
not too concerned about them.


-Ryan

Mike Blezien wrote:
Out of curiosity, what should be done if the results are different? 
We checked on one of our boxes and got two different results:


SELECT UNIX_TIMESTAMP('2007-03-11 02:00:00'),
   -> UNIX_TIMESTAMP('2007-03-11 03:00:00');
+---+---+ 

| UNIX_TIMESTAMP('2007-03-11 02:00:00') | UNIX_TIMESTAMP('2007-03-11 
03:00:00') |
+---+---+ 

|117360 |
1173603600 |
+---+---+ 



Thx's
Mickalo
- Original Message - From: "Ryan Stille" <[EMAIL PROTECTED]>
To: 
Sent: Saturday, February 24, 2007 3:28 PM
Subject: Re: MySQL Daylight Savings Time Patch - easy check



Ryan Stille wrote:

Paul DuBois wrote:

At 4:40 PM -0600 2/20/07, Ryan Stille wrote:
Is there an easy way to test to see if MySQL already has the 
proper tables loaded?


-Ryan


Yes, reload them. :-)  After that, they're current! ...



After digging around on the net for a while I found an easy way to 
tell if your MySQL installation is ready for the new daylight savings 
time.


SELECT UNIX_TIMESTAMP('2007-03-11 02:00:00'),  
UNIX_TIMESTAMP('2007-03-11 03:00:00');


This should return the same value, even though you are feeding it 
different times, because this is when the 1 hr change occurs.  I get 
the correct result on both of my machines.  On one of them I've run 
the suggested mysql_tzinfo_to_sql command, on the other, the time 
zone tables are completely empty!


Any wisdom on these time zone tables - are they ever used, should I 
populate them or not?


-Ryan







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Daylight Savings Time Patch - easy check

2007-02-24 Thread Ryan Stille

Ryan Stille wrote:

Paul DuBois wrote:

At 4:40 PM -0600 2/20/07, Ryan Stille wrote:
Is there an easy way to test to see if MySQL already has the proper 
tables loaded?


-Ryan


Yes, reload them. :-)  After that, they're current! ...



After digging around on the net for a while I found an easy way to tell 
if your MySQL installation is ready for the new daylight savings time.


SELECT UNIX_TIMESTAMP('2007-03-11 02:00:00'),  
UNIX_TIMESTAMP('2007-03-11 03:00:00');


This should return the same value, even though you are feeding it 
different times, because this is when the 1 hr change occurs.  I get the 
correct result on both of my machines.  On one of them I've run the 
suggested mysql_tzinfo_to_sql command, on the other, the time zone 
tables are completely empty!


Any wisdom on these time zone tables - are they ever used, should I 
populate them or not?


-Ryan



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Daylight Savings Time Patch

2007-02-23 Thread Ryan Stille

Ryan Stille wrote:

Paul DuBois wrote:

At 4:40 PM -0600 2/20/07, Ryan Stille wrote:
Is there an easy way to test to see if MySQL already has the proper 
tables loaded?


-Ryan


Yes, reload them. :-)  After that, they're current! ...



My timezone tables appear to be empty.  At least the time_zone_name 
and time_zone_transition tables are for sure.


I was under the impression I needed to update these tables, but if it's 
working fine without them, then it must be looking to the OS for 
timezone info?


-Ryan



Probably should have included some more info about my setup. I'm on 
4.1.20-1 on Linux.


-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Daylight Savings Time Patch

2007-02-23 Thread Ryan Stille

Paul DuBois wrote:

At 4:40 PM -0600 2/20/07, Ryan Stille wrote:
Is there an easy way to test to see if MySQL already has the proper 
tables loaded?


-Ryan


Yes, reload them. :-)  After that, they're current! ...



My timezone tables appear to be empty.  At least the time_zone_name and 
time_zone_transition tables are for sure.


I was under the impression I needed to update these tables, but if it's 
working fine without them, then it must be looking to the OS for 
timezone info?


-Ryan



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Daylight Savings Time Patch

2007-02-20 Thread Ryan Stille
Is there an easy way to test to see if MySQL already has the proper 
tables loaded?


-Ryan

Paul DuBois wrote:

At 4:17 PM -0600 2/20/07, Paul DuBois wrote:

At 4:36 PM -0500 2/20/07, Sun, Jennifer wrote:

Any answers for the question below ?

Is there a DST patch for MySql 4.0.20?   Thanks.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, February 09, 2007 9:30 AM
To: mysql@lists.mysql.com
Subject: MySQL Daylight Savings Time Patch

Is there a DST patch for MySQL 4.0.x series?

I've been getting scary emails from our sys and net admins about
impending doom.

Thanks,

David


Before MySQL 4.1.3, the server gets its time zone from the operating system
at startup.  The time zone can be specified explicitly by setting the TZ
environment variable, or by using the --timezone option to the
mysqld_safe server startup script.

Assuming that the server host itself has had its operating system updated
to handle the new Daylight Saving Time rules, that should be all that's
necessary for MySQL to know the correct time.


I should mention also:

For those of you running 4.1.3 or later, to get your MySQL server to
know about the new DST rules, you should make sure your OS is updated
with the new zoneinfo files, and then reload those files into MySQL
with mysql_tzinfo_to_sql.  See:

http://dev.mysql.com/doc/mysql/en/time-zone-support.html

Particularly the Note in the middle of the page and the last few 
paragraphs.


You may have previously loaded your system's zoneinfo files into MySQL,
but when those zoneinfo files are updated, the changes do not automatically
propagate to MySQL's time zone tables.  You must reload the tables to
update them.





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: select on multiple fields in several tables?

2007-02-04 Thread Ryan Stille

boll wrote:

Hello-

I'm working with an unfamiliar application, trying to figure out where 
my data is going.


Is it possible to form a query to select from all the columns in 
several tables at once? Something equivalent to:
   SELECT FROM * WHERE * = 'john smith'; If that's not possible I'd 
still like to be able to search each table at once without specifying 
the column names:

   SELECT FROM   WHERE * = 'john smith';
If that can't be done, can a sub query find the column names to be 
selected on?


Basically, I'm trying to avoid doing separate selects on each column 
in each table.


This is using MySQL 5.0.27 .
Thanks for any suggestion or alternative methods!

-John



SELECT * FROM table1, table2, table3 WHERE table1.name = 'john smith' 
AND table2.city = 'Dallas'


If the tables have fields named the same, this may not work very well.  
Then you'd need to list out the column names and alias some of them so 
they come out with different names.
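
A sketch of that aliasing, assuming both tables happen to have a column named id:

SELECT t1.id AS t1_id, t1.name, t2.id AS t2_id, t2.city
FROM table1 t1, table2 t2
WHERE t1.name = 'john smith' AND t2.city = 'Dallas';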


I don't think there is a way to wildcard the column names like you are 
asking, and I can't imagine a case where you'd need to do that either.  When 
would every column in your table have the same value? (john smith in 
your example).


You can programmatically get a list of columns in a table, then when you 
are using Perl or PHP or whatever to build your query string, you can 
loop through the columns to list them.


-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: making graphs with MySQL data

2007-01-12 Thread Ryan Stille

C.R.Vegelin wrote:

Dear List,

I have a MySQL database (V5.0.x) and I need to make graphs.
Does anyone know about good utilities to make graphs ?
I would appreciate your expertise or links.

TIA, Cor
  


Perl with GD?  MS Access/MyODBC could do it too.  You may have to be 
more specific.


-Ryan



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: question on create_tmp_table_priv

2006-11-14 Thread Ryan Stille

Marten Lehmann wrote:
that is the sort of thing that I expected and was afraid of. Also, I 
think the mysql documentation doesn't really point out that temporary 
tables are always kept in memory. Couldn't mysql create them in 
/tmp/. instead?
Actually I was just reading through the docs about this the other day.  
It said temp tables are always created in memory, but can be moved to 
disk if they get too large, or will be created on disk initially if the 
table contains a certain type of field (maybe text or binary, can't 
remember).
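
For reference, the settings the docs are describing can be inspected like this (values vary by version and configuration):

SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'tmpdir';

tmp_table_size caps how large an internal in-memory temporary table may grow before it is converted to an on-disk one, and tmpdir is where those on-disk temporary tables are written.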


-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: InnoDB does not preserve AUTO_INCREMENT -- WTF!?

2006-11-10 Thread Ryan Stille
I came up with a work around when we encountered this.  I don't remember 
exactly (and I don't have access to that code anymore), but I think we 
manually put a piece of code in our SQL setup scripts, before any of our 
insert statements.  This 'mysql command' would set the next available ID 
to whatever we wanted; it's just that if you mysqldump the database back 
out, then load it back in, this next ID setting won't be remembered.  So 
we had to add it to our dump/setup script each time.  You'd have to look 
in the docs to find what command does this; it's something for setting 
the next auto increment id.  Hope this helps.
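
The statement being described is presumably the AUTO_INCREMENT table option; a sketch, with the table name made up:

ALTER TABLE mytable AUTO_INCREMENT = 10000;

As the thread notes, InnoDB recalculates the counter from the existing rows at server startup, so a dump/restore script has to reissue this after every reload.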


-Ryan

Daevid Vincent wrote:

http://dev.mysql.com/doc/refman/5.0/en/innodb-auto-increment-column.html

We have recently switched several database tables from MyISAM to InnoDB, only
to find out this colossal design flaw in InnoDB tables.

We ship out mySQL on our appliances in enterprise level scenarios. We often
like to start the AUTO_INCREMENT for several tables at 10,000 -- this way we
can reserve the lower 'block' of IDs for our own internal and 'default' use
so all customers have the same basic database schema. It also makes our code
easier to write as we can, in one easy swoop, make the ID's of any record <
1 immutable, uneditable, etc.

Are there plans to fix what we perceive to be a tremendous bug? Why would
you make it so that the AUTO_INCREMENT value is stored in memory only?! What
use is that? I would have preferred if AUTO_INCREMENT threw an error on
InnoDB tables, this way we would have known this months ago instead of now
that we're well into this porting of tables, schema and now code. 


This is such a subtle but significant change to the table-type that it
should have been made painfully obvious to anyone trying to switch types.
You are costing us many man-hours now of re-architecting existing code, and
trying to figure out a way to upgrade existing customers. 


...And yes, we *DO* pay you your $10,000 support contract.

*sigh*

This is very disappointing.


  




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL on a ram disk?

2006-11-08 Thread Ryan Stille

Maybe I should just be looking at using a HEAP table?
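
A minimal sketch of that, with made-up names (MEMORY is the newer spelling of HEAP):

CREATE TABLE results_cache (
  id INT NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id)
) ENGINE=MEMORY;

Keep in mind that a MEMORY/HEAP table loses its contents on server restart and cannot hold TEXT or BLOB columns.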

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL on a ram disk?

2006-11-08 Thread Ryan Stille
We have an intense data process that runs every few minutes, clearing 
and then loading a database with thousands of records of data, which are 
then queried on from a website.  The periodic load takes about 20 
seconds.  Some of the front end select queries take a second or two.  
This is all running on MSSQL, and we are wondering if we could find any 
speed improvements by moving to MySQL, possibly running on a ram disk.   
Any input appreciated.


Thanks,
-Ryan



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: See how far the slave has replicated from master

2006-10-24 Thread Ryan Tracey

Hi Dominik


is there a way to see how far the slave has replicated from the master
machine?

I know I can issue show slave status on slave, but is there any command
on the MASTER to see information on what the slave has read already?


I don't think that the master has any idea of what the slave(s) is/are
doing. The master only writes stuff to the binary log and leaves it at
that.

Anyway, that's my understanding.
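
For what it's worth, the master does expose a little: each connected slave shows up in SHOW PROCESSLIST as a "Binlog Dump" thread, and slaves started with --report-host appear in SHOW SLAVE HOSTS:

SHOW PROCESSLIST;
SHOW SLAVE HOSTS;

Neither tells you how far the slave has actually applied events, though - that information really does live on the slave (SHOW SLAVE STATUS).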

Cheers,
Ryan

--
Ryan Tracey
Citizen: The World

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Very large form

2006-03-15 Thread Ryan Stille
> My current thinking is to break the entering of the property
> info into 3 separate forms, the first with the mandatory info
> does an insert to create the row, with the remaining 100
> columns being seeded with $t=''. Groups 2 & 3 will then be an
> update to the row using the 'set' keyword, just touching the
> columns under their separate control.
> 
> 
> Does this sound like a workable solution or is there some
> other approach that is better suited to my application?
> 

If you go this route I would set some kind of 'finalized' flag when the
user completes the third group (even if they left everything in this
group blank).  This way you can easily clear out abandoned
registrations, where the user completed step 1 but then bailed out for
some reason.
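
A sketch of that flag idea, with all table and column names made up:

-- form 1: create the row with the mandatory columns
INSERT INTO property (owner_id, address, finalized) VALUES (42, '123 Main St', 0);

-- forms 2 and 3: fill in their column groups (12345 stands for the row id kept in the session)
UPDATE property SET bedrooms = 3, bathrooms = 2 WHERE property_id = 12345;

-- when the last form is submitted
UPDATE property SET finalized = 1 WHERE property_id = 12345;

-- periodic cleanup of abandoned registrations
DELETE FROM property WHERE finalized = 0 AND created_on < NOW() - INTERVAL 7 DAY;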

But I doubt you'll have a ton of users entering in property at one time,
I think you even said most of your traffic would be browsing and reading
the listings.  So I don't think it would be a problem to build up all
the data in a session and then write it all at once at the end.

-Ryan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to Log Warnings and Errors from queries

2006-03-13 Thread ryan lwf
Hi Rithish,

Thank you all for your suggestion, I would definitely give it a shot.

Regards,
Ryan.

On 3/13/06, Rithish Saralaya <[EMAIL PROTECTED]> wrote:
>
> Hello Ryan. I am more of a developer than a MySQL administrator. Hence, I
> would always favour applications logging query errors rather than being
> dependent on MySQL to generate a log for me. Of course, I may be wrong.
>
> You could write a query execution function, say exec_mysql_query(...), in one
> of your files, say 'Db.inc', and have it included in all your files.
>
> exec_mysql_query(...) will log all MySQL errors into a file, and you may
> provide a web interface (assuming this is a web application) to
> view/download the log files.
>
> Regards,
> Rithish.
>
>
>
> -Original Message-
> From: ryan lwf [mailto:[EMAIL PROTECTED]
> Sent: Friday, March 10, 2006 8:14 PM
> To: Dan Nelson
> Cc: mysql@lists.mysql.com
> Subject: Re: How to Log Warnings and Errors from queries
>
>
> Hi Dan,
>
> Noted with thanks.
>
> As such, is there a workaround to log problematic SQL queries run against
> the mysqld server?  Do I need to write a separate script to do this?
>
> Regards,
> Ryan.
>
> On 3/10/06, Dan Nelson <[EMAIL PROTECTED]> wrote:
> >
> > In the last episode (Mar 08), ryan lwf said:
> > > I understand that the option log-errors and log-warnings only logs
> > > server related internal errors.  How do I enable logging errors from
> > > queries executed, so that I can fix the problematic query statement
> > > accordingly?
> > >
> > > The statement "SHOW WARNINGS" and "SHOW ERRORS" does not work on my
> > > server with mysqld-4.0.25 binary version.
> >
> > Those commands appeared in MySQL 4.1.  Before then, warnings were
> > simply counted.
> >
> > --
> >Dan Nelson
> >[EMAIL PROTECTED]
> >
>
>


Re: How to Log Warnings and Errors from queries

2006-03-10 Thread ryan lwf
Hi Dan,

Noted with thanks.

As such, is there a workaround to log problematic SQL queries run against
the mysqld server?  Do I need to write a separate script to do this?

Regards,
Ryan.

On 3/10/06, Dan Nelson <[EMAIL PROTECTED]> wrote:
>
> In the last episode (Mar 08), ryan lwf said:
> > I understand that the option log-errors and log-warnings only logs
> > server related internal errors.  How do I enable logging errors from
> > queries executed, so that I can fix the problematic query statement
> > accordingly?
> >
> > The statement "SHOW WARNINGS" and "SHOW ERRORS" does not work on my
> > server with mysqld-4.0.25 binary version.
>
> Those commands appeared in MySQL 4.1.  Before then, warnings were
> simply counted.
>
> --
>Dan Nelson
>[EMAIL PROTECTED]
>


How to Log Warnings and Errors from queries

2006-03-09 Thread ryan lwf
Hi all,

I understand that the log-errors and log-warnings options only log
server-related internal errors.  How do I enable logging of errors from
queries executed, so that I can fix the problematic query statements
accordingly?

The statement "SHOW WARNINGS" and "SHOW ERRORS" does not work on my server
with mysqld-4.0.25 binary version.

Any inputs are appreciated.

Thanks,
Ryan.


RE: Charset questions

2006-03-09 Thread Ryan Stille
Are my emails not coming through?  Or is this question way too 'newbie'
and no one wants to touch it?

-Ryan

Ryan Stille wrote:
> I'm still hoping someone else can share their input on this.
> 
> What do other people usually do as far as the collation setting?
> 
> Thanks,
> -Ryan
> 
> Ryan Stille wrote:
>> When we migrated to MySQL from MS SQL, I left everything set to the
>> default as far as collations - latin1_swedish_ci.
>> This was based on digging through the manual and google. But now I am
>> migrating the application to a newer version of ColdFusion and am
>> running into some issues with charsets on some ColdFusion functions.
>> I am wondering if my database charset has anything to do with it.
>> 
>> What do other people usually do as far as the collation setting?
>> 
>> We are in the US, but do have a few sites that make use of German
>> and Spanish characters. 
>> 
>> Thanks,
>> -Ryan



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Charset questions

2006-03-08 Thread Ryan Stille
I'm still hoping someone else can share their input on this.

What do other people usually do as far as the collation setting?

Thanks,
-Ryan

Ryan Stille wrote:
> When we migrated to MySQL from MS SQL, I left everything set
> to the default as far as collations - latin1_swedish_ci.
> This was based on digging through the manual and google.  But
> now I am migrating the application to a newer version of
> ColdFusion and am running into some issues with charsets on
> some ColdFusion functions.  I am wondering if my database
> charset has anything to do with it.
> 
> What do other people usually do as far as the collation setting?
> 
> We are in the US, but do have a few sites that make use of
> German and Spanish characters.
> 
> Thanks,
> -Ryan



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: How can I observe mysqld?

2006-03-07 Thread Ryan Stille
> What else can I do to observe myslqd?

mytop may be of some use: http://jeremy.zawodny.com/mysql/mytop/

-Ryan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Charset questions

2006-03-03 Thread Ryan Stille
When we migrated to MySQL from MS SQL, I left everything set to the
default as far as collations - latin1_swedish_ci.  This was based on
digging through the manual and google.  But now I am migrating the
application to a newer version of ColdFusion and am running into some
issues with charsets on some ColdFusion functions.  I am wondering if my
database charset has anything to do with it.

What do other people usually do as far as the collation setting?

We are in the US, but do have a few sites that make use of German and
Spanish characters.
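
One common approach when the client side works in Unicode is to move the affected databases or tables to utf8; a sketch with made-up names (latin1 does cover German and Spanish, so this is a choice rather than a requirement):

CREATE DATABASE sitedata DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE pages CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;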

Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Deleted sock file, how can I recover

2006-03-03 Thread Ryan Stille
Shut it down with mysqladmin, from another machine using the -h (host)
option.  

When you are using the CLI locally it connects through the socket, but
if you do it from another machine it will use TCP, getting around the
need for the missing socket file.

You may also be able to connect via TCP from the local machine,
depending on what address you use to connect to. (local IP, local host,
actual hostname, etc).  
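
A sketch of that, with host and credentials as placeholders:

mysqladmin -h db.example.com -u root -p shutdown
mysqladmin -h 127.0.0.1 -P 3306 -u root -p shutdown

Connecting to 127.0.0.1 instead of localhost makes the client use TCP rather than the missing socket file.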

-Ryan


Scott Haneda wrote:
> ERROR 2002: Can't connect to local MySQL server through
> socket '/tmp/mysql.sock'
> 
> I accidentally deleted the above file, I can not connect any
> new connections in the CLI, old stuff seems to still be ok.
> 
> I can not seem to shut down the server, as that file is needed.
> 
> Any way I can recreate it, and what do I put in it?
> 
> I can force kill the server and I am guessing it would all
> come back up ok, but there must be a gentler way?
> --
> -
> Scott HanedaTel: 415.898.2602
> <http://www.newgeo.com> Novato, CA U.S.A.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: manual deletion of binary log files

2006-03-03 Thread Ryan Stille
Yes, look at the PURGE LOGS command.
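
A sketch of the syntax (the log name and date are placeholders; SHOW MASTER LOGS lists what is currently there):

SHOW MASTER LOGS;
PURGE MASTER LOGS TO 'myhost-bin.000123';
PURGE MASTER LOGS BEFORE '2006-03-01 00:00:00';

If any slaves replicate from this server, check that they have already read past the logs being purged before removing them.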

Rithish Saralaya wrote:
> Hello.
> 
> Can I delete off the binary log files manually? I do not want
> to 'RESET MASTER', as it will clear all the binary logs, and
> that's not what I want to do. The database is backed up every
> midnight, and I wouldn't want to keep any of the bin logs except the
> latest one. 
> 
> We are on RHEL - MySQL 4.1.11 - INNoDB storage engine.
> 
> Regards,
> Rithish.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Want mysql to return tablename.fieldname format

2006-03-02 Thread Ryan Stille
The query has a number of joins and it was selecting * from all the
tables, so it was not apparent where each 'id' field was from.  It was
just a poorly written query all around. I've fixed the problem by
narrowing down the number of fields it selects and giving the ambiguous
fields specific names (ads.id as adid).

Thanks,
-Ryan

Rhino wrote:
> I don't understand what you want. If you have the original
> query, it should be apparent from it where each 'id' column
> originated. If you're not sure how to read the query, post it
> and we can help you figure out which table provided each 'id' column.
> 
> --
> Rhino
> 
> - Original Message -
> From: "Ryan Stille" <[EMAIL PROTECTED]>
> To: 
> Sent: Thursday, March 02, 2006 12:42 PM
> Subject: Want mysql to return tablename.fieldname format
> 
> 
> I am working with an existing complicated query someone wrote
> years ago.
> When I dump the data from the query to try to figure out why
> I'm getting
> unexpected data, I have three fields named "id".  Is there any way to
> tell mysql to name the fields with the table name when they are
> returned, so they show up as ads.id, track.id, etc?
> 
> -Ryan



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: create/restore database without binary logging

2006-03-02 Thread Ryan Stille
Put "SET SQL_LOG_BIN=0" at the top of your dump file.  That will turn
off logging just for your session.
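
A sketch of what the top of the dump file then looks like; note that SET SQL_LOG_BIN requires the SUPER privilege:

SET SQL_LOG_BIN=0;
-- ...the dump's CREATE TABLE and INSERT statements follow unchanged...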

-Ryan
 

> -Original Message-
> From: sheeri kritzer [mailto:[EMAIL PROTECTED] 
> Sent: Thursday, March 02, 2006 11:56 AM
> To: Rithish Saralaya
> Cc: MySQL general mailing list
> Subject: Re: create/restore database without binary logging
> 
> Rithish,
> 
> It's possible to do this by turning off binary logging, 
> restarting the server, importing, turning on binary logging, 
> and restarting.
> 
> -Sheeri
> 
> On 2/27/06, Rithish Saralaya 
> <[EMAIL PROTECTED]> wrote:
> > Hello.
> >
> > I was going to recreate a database of size 35 GB from sql 
> dump file. 
> > Wanted to know if it is possible to do it without mysql 
> writing into 
> > the binary log. If yes, how?
> >
> > Regards,
> > Rithish.
> >
> >
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Want mysql to return tablename.fieldname format

2006-03-02 Thread Ryan Stille
I am working with an existing complicated query someone wrote years ago.
When I dump the data from the query to try to figure out why I'm getting
unexpected data, I have three fields named "id".  Is there any way to
tell mysql to name the fields with the table name when they are
returned, so they show up as ads.id, track.id, etc?

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: User is rejected because of IP, but hostname is allowed

2006-02-17 Thread Ryan Stille
>>> did u start mysql with --skip-name-resolve ???
>>> Kishore Jalleda 
 
Kim Christensen wrote:
> On 2/16/06, Ryan Stille <[EMAIL PROTECTED]> wrote:
>> Nope.
> 
> Well, have you tried it? Did it solve your problem?

Sorry, I misunderstood.  I thought Kishore was saying that if I had
started the server with that option, that could be the cause of my
problem.  I'd really rather not restart the server, as it is a
production system.  I can schedule to have it done if I must.

After looking in the manual I don't know if the --skip-name-resolve
option would help me.  It says: "Hostnames are not resolved. All Host
column values in the grant tables must be IP numbers or localhost." - I
don't want to specify IP's in my grant tables, I want to use a wildcard
at my domain: %.willcomminc.com.

Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: User is rejected because of IP, but hostname is allowed

2006-02-16 Thread Ryan Stille
Nope.


  _  

From: Kishore Jalleda [mailto:[EMAIL PROTECTED] 
Sent: Thursday, February 16, 2006 4:32 PM
To: Ryan Stille
Cc: mysql@lists.mysql.com
Subject: Re: User is rejected because of IP, but hostname is
allowed


did u start mysql with --skip-name-resolve ???
 
Kishore Jalleda 
 



User is rejected because of IP, but hostname is allowed

2006-02-16 Thread Ryan Stille
I am trying to connect to my MySQL 4.x server from a new machine.  I am
getting this error:

Got error: 1045: Access denied for user 'user'@'192.168.2.56' (using
password: YES) when trying to connect

BUT I have a user setup in mysql with allowed access from
'%.willcomminc.com'.  The machine I'm trying to connect from is named
dbdev.willcomminc.com.  I did a reverse lookup from the database server
and the IP 192.168.2.56 does indeed lookup to dbdev.willcomminc.com.

Why is MySQL ignoring the hostname and using the IP?

I did google for this first but didn't come up with anything.

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: data entry GUI

2006-02-01 Thread Ryan Stille
You can also install MyODBC and then hook an Excel spreadsheet into your
database.  Editing the spreadsheet will update data in your database.
This isn't a good solution if you are going to be creating new tables
often.  But for manipulating data in a known set of tables it's great.

-Ryan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Converting a date/time field to UTC

2006-01-16 Thread Ryan Stille
I have a timestamp stored in a datetime field.  It gets set using Now()
when the record was inserted.  I would like to pull the data back out
and have it in UTC time.  Is there an easy way to do this?  I've been
through the manual and on google but haven't come up with anything.
This is in a PHP script I'd like other people to be able to use, so I'd
like it to easily run on any MySQL 4.1.x system.  Maybe it would just be
easiest if I did the UTC conversion in PHP?  The only other thing I was
thinking of is getting the difference between the regular time and
UTC_TIME and then add that to the field value.
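
A sketch of both ideas on 4.1.3 and up, with the table and column names made up:

-- fixed offset, no time zone tables needed ('-06:00' stands for the server's own offset)
SELECT CONVERT_TZ(created_at, '-06:00', '+00:00') FROM events;

-- or compute the current offset at query time, much like the idea above
SELECT ADDTIME(created_at, TIMEDIFF(UTC_TIMESTAMP(), NOW())) FROM events;

Either way the offset applied is the current one, so rows written under a different DST offset can come out an hour off; CONVERT_TZ with named zones avoids that, but needs the time zone tables loaded.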

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Backups

2006-01-16 Thread Ryan Stille
> Furthermore, would it make more sense to have the data dump locally,
> and then use a script to move the contents of the dump to a machine
> on the network, perhaps even to a machine located on an alternate
> network accessed via a second ethernet card?

This would be the simplest approach.  That is what we are currently
doing.  Are you using InnoDB tables or MyISAM databases or both?

If InnoDB, dump your data with the --single-transaction option.  It
should make the dump process run in a separate transaction, which will
allow reads and write to continue to take place.

If MyISAM, there are no real good options for getting a clean dump,
other than to lock all the tables and block your users out during that
time.  You can use the --lock-tables option but it does not work as well
as --single-transaction for InnoDB databases.

If

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



4.1.16-1 RPM's

2006-01-01 Thread Ryan
Not sure if this is a PHP or MySQL issue, but I am having problems
re-compiling PHP 5.0.5 after upgrading to the 4.1.16-1 set of MySQL RPM's
which were posted on Dec 25, 2005.  I previously had version 4.1.16-0 which
worked fine.  It appears to be a zlib conflict of some sort as I am
compiling PHP with '--with-mysqli=/usr/bin/mysql_config' and '--with=zlib'.
After I install MySQL-devel-4.1.16-1.glibc23.i386.rpm, during the PHP
compilation process, I get the error:

grep: /usr/lib/mysql/libz.la: No such file or directory
/bin/sed: can't read /usr/lib/mysql/libz.la: No such file or directory
libtool: link: `/usr/lib/mysql/libz.la' is not a valid libtool archive

I also tried the MySQL-devel-standard-4.1.16-1.rhel3.i386.rpm package and
with that, I get errors:

/usr/lib/mysql/libz.a(adler32.o)(.text+0x0): In function `adler32':
: multiple definition of `adler32'
/usr/lib/mysql/libz.a(adler32.o)(.text+0x0): first defined here
/usr/lib/mysql/libz.a(adler32.o)(.text+0x250): In function
`adler32_combine':
: multiple definition of `adler32_combine'
/usr/lib/mysql/libz.a(adler32.o)(.text+0x250): first defined here
/usr/lib/mysql/libz.a(compress.o)(.text+0x0): In function `compress2':
: multiple definition of `compress2'
/usr/lib/mysql/libz.a(compress.o)(.text+0x0): first defined here
...

Did something change in the 4.1.16-1 RPM's in regards to zlib?  As I
mentioned above, 4.1.16-0 worked fine.

Thanks,
Ryan



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Unable to reference right join table in left join statement under windows...but works under linux

2005-12-19 Thread Ryan Allbaugh
I am using MySQL 5.0.15 on windows and cannot run this query:

SELECT a.*,b.name, c.fullname,d.fullname
FROM access_authorization a, building b
LEFT JOIN users c ON a.createdby=c.id
LEFT JOIN users d ON a.modifiedby=d.id
WHERE a.sortcode=b.sortcode AND
  a.sortcode like '1,2,1,6%' LIMIT 0, 25

I receive:
ERROR 1054 (42S22): Unknown column 'a.createdby' in 'on clause'

But this query DOES work under my linux mysql 5.0.0-alpha!

I can modify the SQL Statement to the following and it works fine:

SELECT a.*,b.name, c.fullname,d.fullname FROM access_authorization a
LEFT JOIN users c ON a.createdby=c.id
LEFT JOIN users d ON a.modifiedby=d.id
RIGHT JOIN building b ON a.sortcode=b.sortcode
WHERE a.sortcode like '1,2,1,6%' LIMIT 0, 25


But I have a lot of SQL statements like this and I do not want to have
to modify them all.  Does anyone have any ideas on what is wrong?

I've been able to reproduce the problem with some generic tables, so I
won't include my table definitions here.
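
Assuming the cause is the join-precedence change introduced in MySQL 5.0.12 (comma-joined tables are no longer visible to a later ON clause), the documented workaround is to parenthesize the comma join instead of rewriting every statement; a sketch against the first query above:

SELECT a.*, b.name, c.fullname, d.fullname
FROM (access_authorization a, building b)
LEFT JOIN users c ON a.createdby = c.id
LEFT JOIN users d ON a.modifiedby = d.id
WHERE a.sortcode = b.sortcode
  AND a.sortcode LIKE '1,2,1,6%' LIMIT 0, 25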

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Mysql compile error - fixed

2005-11-23 Thread Ryan Stille
I fixed the whole problem by typing this one line:
`ln -s libz.so.1.1.3 /usr/lib/libz.so`

Linux is such a pain sometimes.

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Mysql compile error - more info

2005-11-23 Thread Ryan Stille
I thought I would get around the whole thing by just installing the RPM
instead of compiling.  So I downloaded MySQL-devel-4.1.14-0.i386.rpm and
installed it.  Then I went to install the DBD-mysql module and got this
error during the compile:

/usr/bin/ld: cannot find -lz
collect2: ld returned 1 exit status

An error occurred while linking the DBD::mysql driver. The error
message seems to indicate that you don't have a libz.a,
or a libz.so. This is typically resolved by:

1.) You may try to remove the -lz or -lgz flag from the libs list
by using the --libs switch for "perl Makefile.PL".
2.) On Red Hat Linux install libz-devel
3.) On other systems, please contact the mailing list

 [EMAIL PROTECTED]

For further hints, see INSTALL.html, section Linker flags.
make: *** [blib/arch/auto/DBD/mysql/mysql.so] Error 1 


So I figured out how to run mysql_config --libs and get that value, then
put it on the perl Makefile.PL line without the -lz:
perl Makefile.PL --libs="-L/usr/lib/mysql [the -lz was right here]
-lmysqlclient -lcrypt -lnsl -lm -lc -lnss_files -lnss_dns -lresolv -lc
-lnss_files -lnss_dns -lresolv"

Then I was able to compile without error.  So I installed the driver.
But when I try to use it to connect to mysql I get this error:

install_driver(mysql) failed: Can't load
'/usr/lib/perl5/site_perl/5.005/i386-linux/auto/DBD/mysql/mysql.so' for
module DBD::mysql:
/usr/lib/perl5/site_perl/5.005/i386-linux/auto/DBD/mysql/mysql.so:
undefined symbol: compress at
/usr/lib/perl5/5.00503/i386-linux/DynaLoader.pm line 169.

 at (eval 1) line 3
Perhaps a required shared library or dll isn't installed where expected
 at /home/wms/test1.pl line 8



Any help appriciated.  I apologize for this message being formatted
badly, there's no obvious break between my comments and my pasted error
messages.  I hate Outlook with a passion but am forced to use it.

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Mysql compile error

2005-11-23 Thread Ryan Stille
I'm getting ready to do a rollout next week on converting an
application from MSSQL to MySQL.  I'm installing the perl drivers on the
production machines today.  The perl DBD driver requires mysql to be
installed.  Everything's been going fine on all the machines until I got
to the last one.  On that one I am getting this error when compiling
mysql:


Making all in isam
make[2]: Entering directory `/usr/src/mysql-4.1.14/isam'
source='open.c' object='open.o' libtool=no \
depfile='.deps/open.Po' tmpdepfile='.deps/open.TPo' \
depmode=gcc /bin/sh ../depcomp \
gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../include -I../include -O3
-DDBUG_OFF-c open.c
open.c: In function `nisam_open':
open.c:136: `F_TO_EOF' undeclared (first use in this function)
open.c:136: (Each undeclared identifier is reported only once
open.c:136: for each function it appears in.)
make[2]: *** [open.o] Error 1
make[2]: Leaving directory `/usr/src/mysql-4.1.14/isam'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/src/mysql-4.1.14'
make: *** [all] Error 2


Here is what I used for the ./configure command:
./configure --without-server \
--prefix=/usr/local/mysql \
--exec-prefix=/usr/local/mysql \
--with-named-curses-libs=/usr/lib/libncurses.so.4 \
--enable-thread-safe-client \
--with-extra-character-sets=complex

Any ideas on what's wrong?  I am installing version 4.1.14, and the
server is running RedHat 6.2.

Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



timestamp

2005-11-16 Thread Ryan Escarez
Is it possible to get the given (unix) timestamp in milliseconds
since the epoch?


tia!

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



UNIX timestamp with microseconds

2005-11-16 Thread Ryan Escarez
Is it possible to get a UNIX timestamp with microseconds?

When I try the following, it just gives 10 digits:

mysql>SELECT unix_timestamp('20051114095641'+ INTERVAL 0 HOUR) as ts;

output :

1131933401 <--- 10 digits

any tips?

tia!
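
Given that UNIX_TIMESTAMP() only resolves to whole seconds, the closest server-side approximation is simply to scale the result (a sketch; the last three digits will always be zero):

SELECT UNIX_TIMESTAMP('20051114095641' + INTERVAL 0 HOUR) * 1000 AS ts_ms;

Real millisecond or microsecond precision has to come from the application side.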

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Terrible MySQL Administrator

2005-10-20 Thread Ryan Escarez
On 10/20/05, Rich <[EMAIL PROTECTED]> wrote:
> Hi folks.
>
> Since I installed MySQL Administrator on OS X and tried to put a schedule in
> for a backup, the thing simply won't go away.  It sends an email into my
> mail app in the terminal on a daily basis and it's driving me
> mad...slowly...bit by bit.
>
> Can someone suggest how I get rid of these emails, as the cron was deleted
> weeks ago, and I took out the entry in a crontab I found, but it still
> ...persists...over...and...over.


Have you checked the crontab of every individual user?
I suggest you post the email headers here, so anyone can help you further.

hth

--
Ryan Escarez
CEBB F1E4 1E39 EC48 F05D  6B72 9C11 DD88 5E39 E471
$ gpg --keyserver pgp.mit.edu --recv-keys 0x5E39E471
main(k){float i,j,r,x,y=-16;while(puts(""),y++<15)
for(x=0;x++<84;putchar(" .:-;!/>)|&IH%*#"[k&15]))
for(i=k=r=0;j=r*r-i*i-2+x/25,i=2*r*i+y/10,j*j+i*i<11&&k++<111;r=j);}

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Problem compiling MyODBC - looking for sql.h file

2005-10-11 Thread Ryan Stille
I posted this on the ODBC list, but there is not much traffic there and
no one replied.  Hopefully someone here has dealt with this.

I am setting up ColdFusion to access a MySQL 4.1.x database.  This
required new MyODBC drivers to be installed on the system for
ColdFusion.  After a lot of trial and error, I got it figured out.  I
install mysql on the system (using --without-server), then install
MyODBC.  Then copy the libmyodbc3.so file to a directory under
ColdFusion, and change ColdFusion's odbc.ini file to reference this new
driver.

I am now trying to do it on a second system, but am getting errors
during the ./configure of MyODBC (3.5.11).  This is my configure
statement:

./configure --with-mysql-path=/usr/local/mysql \
--without-samples \
--disable-test \
--enable-thread-safe

And this is the resulting output:

...
ODBC DRIVER MANAGER CONFIGURATION - LIBRARIES AND HEADERS
checking for isql... No
checking for unixODBC version... Unknown
checking sql.h usability... No
checking sql.h presence... No
checking for sql.h... no
configure: error: Unable to find the unixODBC headers in
'/usr/local//include'


It's complaining about unixODBC headers, maybe specifically the sql.h
file in the second to last line?  But I don't have unixODBC installed on
the FIRST system, the one that is working just fine.  (ColdFusion has
its own built-in ODBC manager; unixODBC is not necessary.)  However
there is a /usr/local/include/sql.h file on that system.  I don't know
how it got there.  By looking inside the file it appears to be related
to MyODBC 3.5.11.

Just for kicks I copied this file to the second system, but got the same
error when trying to run ./configure.

Any ideas?

-Ryan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Detect if table exists from within MySQL?

2005-10-06 Thread Ryan Stille
> Maybe you could use
> 
> SHOW TABLES LIKE 'your_table';

That's a great idea, I just tried it in several ways, like:

IF EXISTS (SHOW TABLES LIKE 'cfgbiz') THEN
   SELECT siacnotifyto FROM cfgbiz
ELSE
   SELECT '' as siacnotifyto
END IF;

-and- 

select IF((SHOW TABLES LIKE 'cfgbiz'),notifyto,'') FROM cfgbiz;

But it looks like the SHOW TABLES statement just doesn't return like a
regular SELECT statement does, because the above works if I use it like
this:

select IF(1,notifyto,'') FROM cfgbiz;

-Ryan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Detect if table exists from within MySQL?

2005-10-06 Thread Ryan Stille
> If 'SHOW COLUMNS FROM tablename' returns error 1146 (42S02), the
> table doesn't exist. 

This causes my application (ColdFusion) to throw an exception.

If I have to, I could resort to doing another query in my application
(SHOW TABLES) and seeing if my table was returned in that list.  But I
was hoping for a more elegant way to do it, within the single query.
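
A sketch of that two-step fallback, using the names from this thread (the application checks whether the first statement returned a row before issuing the second):

SHOW TABLES LIKE 'cfgbiz';
SELECT notifyto FROM cfgbiz;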

-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Detect if table exists from within MySQL?

2005-10-06 Thread Ryan Stille
I am converting some code from MSSQL to MySQL.  In one place I need to
have a conditional query depending on if a table exists or not.  There
are different versions of this application and the table only exists in
some of them.  Here is how it was done in MSSQL:

IF OBJECT_ID('cfgbiz') IS NOT NULL
  SELECT notifyto FROM cfgbiz
ELSE
  SELECT '' as notifyto

Is there something similar in MySQL? I am running version 4.1.x.

Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Intelligent Converters product: MSSQL-to-MySQL

2005-09-13 Thread Ryan Stille
> OK, you confused me. How can 16 be bigger than 16? For character-based
> fields, the number in the parentheses shows the length of the field,
> does it not? The first part, the "text" or "char" or "varchar" or ...,
> tells the engine what kind of data a field will contain. 

> How can TEXT(16) hold more data than VARCHAR(16) ? 

I'm not sure how the (16) is used on a TEXT field.  MS SQL has TEXT
fields and VARCHAR fields.  A VARCHAR(16) in MSSQL would be the same as
a VARCHAR(16) in MySQL.  But in my MSSQL database I have data with
thousands of characters in a TEXT(16) field.  (In MS SQL the 16 shown
for a TEXT column is the size of the in-row pointer to the out-of-row
text data, not a length limit, which is why it holds far more than 16
characters.)

-Ryan




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Intelligent Converters product: MSSQL-to-MySQL

2005-09-13 Thread Ryan Stille
Josh Chamas wrote:
> Ryan Stille wrote:
>> Has anyone ever used this MSSQL-to-MySQL converter?  It's pretty
>> reasonable at $40, and the demo output I got looked pretty good.  But
>> I wanted to see if there is anything I should be wary about.
>> http://www.convert-in.com/mss2sql.htm
>> 
> 
> I have done a test migration before with it and it seemed fine.
> 
> I would also stay tuned for our MySQL Migration Toolkit to
> support MSSQL migrations.  You can find it here:
> 
>http://www.mysql.com/products/tools/migration-toolkit/

I did go ahead and purchase the product. It worked pretty well, I had to
modify the output a little.  When exporting the data using this tool, it
added ALTER TABLE lines that changed the DATETIME fields to DATE fields.
I guess it saw that the time for all the dates was 0:00 and thought this
information was not necessary.

I did try the MySQL Migration Toolkit.  Some of the definitions it
generated were incorrect.  One thing I remember for sure is that it was
turning my TEXT(16) fields (which are pretty long) into VARCHAR(16),
which is pretty short.  But I understand it is still Alpha software.

Thanks,
-Ryan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Questions about last_insert_id() instead of @@IDENTITY

2005-09-07 Thread Ryan Stille
> it might be best to ask this in the cf forums, however did
> you try to separate the statements with a ; ? I haven't tried
> 2 query statements for MySQL in cfquery...yet ;-) Also I
> believe MySQL 3.23+ supports @@identity

Yes I've tried semicolons after the statements, and I've tried setting
autocommit = 0 at the beginning also.

I will ask on a CF list, because they may have run into this issue.  But
really I think it's a MySQL question.

Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Questions about last_insert_id() instead of @@IDENTITY

2005-09-07 Thread Ryan Stille
I am migrating from MS SQL Server to MySQL 4.1.

I have code in my ColdFusion/SQL Server application that went something
like this:

SET NOCOUNT ON
INSERT INTO TABLE ()
SELECT @@IDENTITY AS adid
SET NOCOUNT OFF


That whole block was processed in coldfusion as a single query.  I gave
it a name of "insertquery", and I can get the ID back as
insertquery.adid.

I am having trouble getting the same functionality with MySQL.

INSERT INTO TABLE (...)
SELECT LAST_INSERT_ID() as adid


I get this MySQL error:
You have an error in your SQL syntax; check the manual that corresponds
to your MySQL server version for the right syntax to use near 'INSERT
INTO rps_names (nameid, name) VALUES (NULL, "Ryan Smithland") SELECT LA'
at line 2

With MSSQL the trick was the NOCOUNT setting.  That told the database server
not to return a count of the rows affected.  Is there something similar
I need to do in MySQL to get this to work?
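
One approach that usually works is to send the two statements separately over the same connection; LAST_INSERT_ID() is tracked per connection, so the value is still there for the second call. A sketch using the insert from the error message:

INSERT INTO rps_names (nameid, name) VALUES (NULL, 'Ryan Smithland');
SELECT LAST_INSERT_ID() AS adid;

The syntax error above suggests both statements were sent in a single query string, which the driver in use here evidently does not accept.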

Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Connecting using MyODBC

2005-09-02 Thread Ryan Stille
I am trying to connect to MySQL from within ColdFusion on a linux box.
ColdFusion (version 5) came with a Merant driver, but it is too old to
support transactions.

I've downloaded libmyodbc_mysql.so (MyODBC 2.5) and also libmyodbc3.so
(MyODBC 3.5) and could not get either of these to work.  But I'm not
sure if I'm using them correctly.

Basically all I've done is copy these files to my machine and change
this line in ColdFusion's odbc.ini file:
Old: Driver=/opt/coldfusion/lib/CFmysql15.so
New: Driver=/opt/coldfusion/lib/libmyodbc_mysql.so

Does this sound like I'm using them correctly?  Most of the
documentation I find about MyODBC talks about installing an odbc
manager, I believe ColdFusion already has one built in, since the
odbc.ini file already exists and is used.

When I use ODBC 3.5 my server just hangs.  When I use the 2.5 driver I
get this error:
ODBC Error Code = IM003 (Specified driver could not be loaded)

Thanks for any help.
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Intelligent Converters product: MSSQL-to-MySQL

2005-08-25 Thread Ryan Stille
Has anyone ever used this MSSQL-to-MySQL converter?  It's pretty
reasonable at $40, and the demo output I got looked pretty good.  But I
wanted to see if there is anything I should be wary about.
http://www.convert-in.com/mss2sql.htm

Thanks,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Mysqldump Problem

2005-08-18 Thread Ryan Stille
Carlos J Souza wrote:
> Sirs,
> 
> When i use  mysqldump on Version 4.1.x, all tables had a one
> record insert generated in script. When i use mysqldump on a
> 4.0.x version this problem does not occurs.
> 
> How to solve it?

Not 100% sure what you are looking for, but try using "--skip-extended-insert".

-Ryan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Backups on high-availability servers

2005-07-22 Thread Ryan Stille
I would really like to hear how some of you are handling backups on
high-availability servers.  The DBA in my company is skeptical about
switching from MSSQL Server to MySQL, and backups are one of his reasons.
If someone is making MySQL work in a high-availability
environment, let's hear about it!

Thanks for any input,
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Questions about backups, InnoDB tables, etc.

2005-07-21 Thread Ryan Stille
I'm trying to get a handle on MySQL backups and hot backups using MyISAM
and InnoDB tables together.  We plan to switch from SQL Server to MySQL
soon.

How are you guys handling full-backups of databases with mixed MyISAM
and InnoDB tables?  From what I've read (and I've been reading a lot),
if we are using only one or the other then it is a pretty simple matter
to get a clean backup.  Use --lock-tables for MyISAM, or
--single-transaction if we using only InnoDB tables.

I've been doing some testing and came across something I don't
understand.  I filled my test InnoDB formatted table with a lot of data
so mysqldump will take a while to complete.  Then I start mysqldump on
this database with the --single-transaction option.  While that is
running, I insert a record into the table.  It completes successfully.  I
then run a query and am able to see that record in the database.  The
mysqldump is still running.  How is this record getting inserted into
the database?  I thought it was locked while the dump was happening?  I
thought it would get queued up and inserted when the mysqldump is
finished.  The record was NOT in the dump, this part made sense.
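
What the test is showing is InnoDB's consistent (versioned) read: --single-transaction takes a snapshot inside a transaction rather than locking the tables, so concurrent writes proceed immediately but stay invisible to the dump. A sketch of the same behaviour at the SQL level, with a made-up table:

-- session 1
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT COUNT(*) FROM big_innodb_table;              -- snapshot count

-- session 2, while session 1 is still open
INSERT INTO big_innodb_table (id) VALUES (1000001); -- completes right away

-- session 1 again
SELECT COUNT(*) FROM big_innodb_table;              -- still the snapshot count
COMMIT;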

Thanks for any help.
-Ryan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Migrating Database

2005-05-08 Thread Ryan J. Cavicchioni
Brian Erickson wrote:

>Greetings all,
>
>We are migrating our web site from one server to another and are
>having some issues with transferring our MySQL database. Here's the
>process we're currently using:
>
>1) run mysqldump for each table in our database (i.e. one file for each table)
>2) compress each file using gzip
>3) transfer the files to our new server
>4) decompress
>5) import each table using:
>mysql [dbname] < [filename]
>
>Here's the problem: several of our tables have over 20 million rows,
>and this import process is taking well over 6 hours. Obviously, our
>site cannot go live until the database is fully imported, and this is
>much too long for us to be down.
>
>The two possible solutions I've researched are:
>
>1) Copy data files directly. This concerns me because of possible
>version incompatibilities.
>2) Using LOAD DATA commands. I'm not familiar with these at all, and
>frankly, not real sure how they work.
>
>Can anyone offer us some advice as to the easiest way we can
>accomplish this, whether it's one of the above solutions, or another
>one completely?
>
>Thanks in advance!
>
>  
>
If you could bring both database servers down, you could tar up the data
directory and untar it on the other server.

-- 
Ryan Cavicchioni

GPG ID: C271BCA8
GPG Public Key: http://confabulator.net/gpg/ryan.asc
GPG Fingerprint: 83E4 2495 6194 0F66 ED85 22B4 4CC0 DA01 C271 BCA8


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Bug or query problem?

2005-04-30 Thread Ryan A
Hi,
This is driving me nuts.  Please tell me: is this a bug or a problem with my
query?

jappz_guestbook(owner_cno,  sent_datetime,  is_secret,  accepted)
jappz_member_profile(cno, pic_name)




jappz_guestbook's owner_cno and jappz_member_profile's cno are the same

what i need to do is select * from jappz_guestbook where owner_cno=x and
only select pic_name from jappz_member_profile where cno=jappz_guestbook.cno


This is my query:
SELECT from_cno, from_usernam, sent_datetime, is_secret, accepted,
       jappz_member_profile.pic1
FROM jappz_guestbook
INNER JOIN jappz_member_profile
        ON jappz_guestbook.owner_cno = jappz_member_profile.cno
WHERE jappz_guestbook.owner_cno = jappz_member_profile.cno
ORDER BY from_cno DESC LIMIT 0,30

I am getting 4 results:
from_cno  from_usernam  sent_datetime        is_secret  accepted  pic1
1         r             2005-05-01 00:03:14  0          0         s.jpg
1         r             2005-05-01 00:04:09  1          0         s.jpg
3         ryana3        2005-05-01 01:15:57  0          0         s.jpg
3         ryana3        2005-05-01 01:16:06  1          0         s.jpg


which is wrong because "pic1" for "from_cno" should be "r.jpg", but it's
somehow "caching" the above.

Please advise.

Thanks,
Ryan A
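
If the picture wanted is the sender's (from_cno) rather than the guestbook owner's, the join condition is what has to change; a sketch assuming that is the intent, with 1 standing in for x:

SELECT g.from_cno, g.from_usernam, g.sent_datetime, g.is_secret, g.accepted, p.pic1
FROM jappz_guestbook g
INNER JOIN jappz_member_profile p ON g.from_cno = p.cno
WHERE g.owner_cno = 1
ORDER BY g.from_cno DESC LIMIT 0,30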




-- 
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.308 / Virus Database: 266.11.0 - Release Date: 4/29/2005


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Help with query

2005-03-02 Thread Ryan Schefke
Hello,

 

I have two tables:

 

Table2 (listings), columns - listingID, state, preferred, siteAddress,
siteTitle, siteDescription

Table1 (invotes), columns listingID

 

I want to select the listingID of the rows in Table2 where the 'state'
column equals '$st' then count the number of rows in Table1 that have the
same listingID as those selected in Table2. I want to pull the information
from Table2 regardless if there is an entry in Table1 with the same
listingID or not. However, if the listingID is the same in Table1 compared
to Table2 then I want to count the number of rows that have that listing.
More importantly, I need to sort the rows from highest to lowest. 

 

I hope I made sense, this is what I have so far (which will output what I
want but NOT in order of votes ($invotes in my example):

 

 

$query_state = "SELECT
listingID FROM listings WHERE state='$st'"; 

$result_state  =
@mysql_query($query_state);

if
(mysql_num_rows($result_state) > 0) { // if at least one exists



while
($row_state = mysql_fetch_array($result_state, MYSQL_NUM)) {

 



 
$query_category = "SELECT count(*) as hits FROM 

 
invotes WHERE listingID = '$row_state[0]' ORDER BY hits DESC"; 





 
$result_category  = @mysql_query($query_category);





 
$row_category = mysql_fetch_array($result_category, MYSQL_NUM);

 


 


 
$invotes = $row_category[0];

 


 


 
// get listing information

 
$query_client = "SELECT preferred, siteAddress, siteTitle, siteDescription
FROM listings WHERE listingID='$row_state[0]'"; 

 
$result_client  = @mysql_query($query_client);

 
$row_client = mysql_fetch_array($result_client, MYSQL_NUM);

 

...then start printing our information from Table2

 

 

 

Best regards,

Ryan
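
Sorting by votes usually calls for doing the count and the ordering in a single query rather than one query per row in PHP; a sketch using the column names above, with 'TX' standing in for $st:

SELECT l.listingID, l.preferred, l.siteAddress, l.siteTitle, l.siteDescription,
       COUNT(i.listingID) AS hits
FROM listings l
LEFT JOIN invotes i ON i.listingID = l.listingID
WHERE l.state = 'TX'
GROUP BY l.listingID, l.preferred, l.siteAddress, l.siteTitle, l.siteDescription
ORDER BY hits DESC;

The LEFT JOIN keeps listings with no votes (they come back with hits = 0).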

 



Re: Equivalent syntax similar to "dir /w /s" in Windows command prompt

2005-03-02 Thread Ryan Yagatich
Is 'find' what you're looking for?

find ${path} -type f -iname '*.doc' -print

find ${path} -type f -iname '*.doc' -exec /some/import/script {} \;


Thanks,
Ryan Yagatich
 ,______,
/ Ryan Yagatich Pantek Incorporated |
\  http://www.pantek.com/linux   [EMAIL PROTECTED] |
/  One Hour Emergency Response   (877) 546-8934 |
\___|

On Wed, 2 Mar 2005, Scott Hamm wrote:

>Is there any equivalent syntax similar to dir /w /s for all *.doc to
>insert into mysql database?
>


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



query performance

2005-02-16 Thread Ryan McCullough
Can I post a query to this list and ask for help optimizing it?

-- 
Ryan McCullough
mailto:[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Data in different tables or is one big table just as fast?

2005-02-15 Thread Ryan McCullough
How about purging rows older than a month? Do you need to keep them?
Archive them to another database?

Actually, I got a better idea. Have your master db which is huge and
holds everything. Then on a separate DB run a table for each feedid
with the last 100 feeds for that id.

Have a cron job that runs continually updating those tables with current data.

get it?


On Tue, 15 Feb 2005 23:02:38 +0100, Jacob Friis Larsen
<[EMAIL PROTECTED]> wrote:
> > >>>We have a table that grow by 200MB each day.
> > >>>Should we put data in different tables or is one big table just as fast?
> > >
> > > The table contains data from RSS and Atom feeds.
> > > Most users only need to see the newest items.
> > > A select could look like this: "SELECT title, desc FROM items WHERE
> > > feedid = 25 ORDER BY id DESC LIMIT 10"
> >
> > I would, however, be seriously concerned about diskspace if a table is 
> > adding
> > 200 MB a day with no archiving/compression/purges.
> 
> What if we use COMPRESS() for the text in old rows?
> 
> Jacob
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 


-- 
Ryan McCullough
mailto:[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: queries slower on InnoDB

2005-02-14 Thread Ryan McCullough
What about other functions, like doing a min() or max()?  Are those
similarly slow on InnoDB tables?


On Mon, 14 Feb 2005 15:52:21 -0600, Eric Bergen <[EMAIL PROTECTED]> wrote:
> Total row count is cached in the header for MyISAM tables. InnoDB has
> no such mechanism for this because transactions make it impossible to
> keep an exact row count.  In order for InnoDB to get a row count it
> has to do a full scan inside a transaction which will take a long
> time. One way around this is to maintain row count in another table.
> 
> For show table status it's better to only check the tables you need.
> Example: show table status like 'my_table'
> 
> I'm not aware of any plans to speed up either of these commands on InnoDB.
> 
> 
> On Mon, 14 Feb 2005 12:06:09 -0500, Zhe Wang <[EMAIL PROTECTED]> wrote:
> > Hi, there,
> >
> >   I am in the middle of replacing a MyISAM database with InnoDB.
> > Queries "show table status" and "select count" are extremely slow which
> > gave me some pain. I am wondering if there are any other queries on
> > InnoDB that are significantly slower than those on MyISAM other than
> > these two? And does MySQL have any plan to speed up this kind of queries?
> >
> >   Thank you very much in advance for your reply!
> >
> > Regards,
> > Zhe
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> >
> >
> 
> --
> Eric Bergen
> [EMAIL PROTECTED]
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 


-- 
Ryan McCullough
mailto:[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Subquery Oddity? Or where is the error I'm missing?

2005-01-31 Thread Ryan Sommers
While working tonight on some query work I came across the following 
situation.  Bear with me as I build up all the pieces of the final two 
queries, then notice how the subqueried version fails but the 
substituted version doesn't.  What am I missing here:

mysql> SELECT parent_path FROM category WHERE id=2;
+-------------+
| parent_path |
+-------------+
| 1           |
+-------------+
1 row in set (0.00 sec)

mysql> SELECT CONCAT((SELECT parent_path FROM category WHERE id=2), ",2");
+--------------------------------------------------------------+
| CONCAT((SELECT parent_path FROM category WHERE id=2), ",2")   |
+--------------------------------------------------------------+
| 1,2                                                           |
+--------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT * FROM category WHERE parent_path=CONCAT((SELECT
    -> parent_path FROM category WHERE id=2), ",2");
Empty set (0.00 sec)

mysql> SELECT * FROM category WHERE parent_path="1,2";
+----+------------+--------+----------+-------+-------------+------+
| id | name       | parent | children | leafs | parent_path | seq  |
+----+------------+--------+----------+-------+-------------+------+
|  3 | Developers |      2 |        0 |     0 | 1,2         |    0 |
|  5 | Users      |      2 |        0 |     0 | 1,2         |    0 |
+----+------------+--------+----------+-------+-------------+------+
2 rows in set (0.00 sec)

mysql> SELECT * FROM category WHERE parent_path LIKE CONCAT((SELECT
    -> parent_path FROM category WHERE id=2), ",2");
Empty set (0.00 sec)

mysql> SELECT VERSION();
+-------------+
| VERSION()   |
+-------------+
| 4.1.5-gamma |
+-------------+
1 row in set (0.00 sec)
--
Ryan Sommers
[EMAIL PROTECTED]
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

