Re: Can't Connect Localhost

2013-09-01 Thread Howard Hart
Try

mysql -u root -h 127.0.0.1 -p

And if that doesn't work

mysql -u root -h <your PC's IP address> -p
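If neither form connects, it can help to first check whether anything is listening on the MySQL TCP port at all, since error 2003 (10061) means the TCP connection was refused. A minimal sketch (3306 is the usual default port; adjust as needed):

```python
import socket

def can_reach(host, port=3306, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Error 2003 (10061) means the connection was refused, so check both
# names -- on Windows both 'localhost' and 127.0.0.1 go over TCP:
for host in ("localhost", "127.0.0.1"):
    print(host, "reachable:", can_reach(host))
```

If 127.0.0.1 is reachable but localhost is not, the client is resolving the name differently; if neither is, the server isn't listening on TCP at all (check skip-networking / bind-address in the server config).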

On Sep 1, 2013, at 4:59 AM, John Smith cantinaderecuer...@yahoo.com wrote:

 _mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on 
 'localhost' (10061)")
 
 My question: How do I change from localhost to 127.0.0.1?
 TIA
 John
 
 
 On Sun, 1/9/13, Terry J Fundak te...@tjsoftworks.com wrote:
 
 Subject: Re: Can't Connect Localhost
 To: mysql@lists.mysql.com
 Cc: John Smith cantinaderecuer...@yahoo.com
 Date: Sunday, 1 September, 2013, 3:33 AM
 
 Hi John,
 
 Starting over….
 
 What is the error message?
 
 Terry
 
 ___
 Terry J Fundak
 Systems Engineer
 Network Design and Security Solutions for SMBs
 Tech Support - Client and Server Systems
 
 TJSoftworks
 1834 Chadwick Court
 Santa Rosa, CA 95401
 (707) 849-1000 Cell
 e-Mail: te...@tjsoftworks.com
 
 
 
 
 
 On Aug 31, 2013, at 3:26 PM, John Smith cantinaderecuer...@yahoo.com
 wrote:
 
 Hi;
 How do I change my connection from localhost to
 127.0.0.1 on a Win8 machine?
 TIA,
 John
 
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: Looking for consultant

2012-07-18 Thread Howard Hart
You could write to an InnoDB frontend with master/master replication at 
each site, and slave off the local InnoDB server to your local cluster 
at each site.


That would make your writes limited by your InnoDB server performance and 
remote replication speed, but reads would run at cluster speeds and be a 
bit more bulletproof.


That could also potentially cover the foreign key constraint limitation 
in Cluster, since last I checked it doesn't support them (that may have 
changed recently). The foreign key constraint checks in this case would 
be handled by the InnoDB frontend before pushing to Cluster.


Also, it looks like the latest MySQL Cluster solution supports asynchronous 
binlog-style replication per the link below, so I guess that's a possibility 
now too.


http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication.html


On 07/18/2012 04:45 PM, Rick James wrote:

Keep in mind that all cluster solutions are vulnerable to a single power 
failure, earthquake, flood, tornado, etc.

To protect from such, you need a hot backup located remotely from the live 
setup.  This introduces latency that will kill performance -- all cluster solutions depend on 
syncing, heartbeats, etc, that cannot afford long latencies.

You may choose to ignore that issue.  But, before going forward you need to 
make that decision.


-Original Message-
From: Antonis Kopsaftis [mailto:ak...@edu.teiath.gr]
Sent: Wednesday, July 18, 2012 9:09 AM
To: Carl Kabbe
Cc: mysql@lists.mysql.com
Subject: Re: Looking for consultant

Hello,

As far as I can understand from your post, you need a high-availability
MySQL cluster with large capacity.
For high availability you need something that can give you
multi-master replication between two or more MySQL servers.

To my knowledge there are three solutions that can give you
multi-master replication:

1. Official MySQL Cluster
It's an enterprise-class solution, very complicated, but it's fully
multi-master. I used one for about two years, but I don't recommend
it because (at least in my setup) it did not have very good
performance.
It uses its own storage engine (NDB), which has a number of
limitations.

2. Tungsten Replicator
It's a relatively new product. It supports multi-master replication
between different types of databases, and it seems very promising. It's
Java-based. I haven't tested it, but you can read a lot about it at:
http://datacharmer.blogspot.com

3. Percona XtraDB Cluster
It's also a relatively new product. It also supports multi-master
replication, and it seems to have very good performance. For the last
three weeks I have been testing a 3-node cluster of the Percona
software. It seems to work OK, and after some optimization it has
better performance than my production MySQL setup (simple primary-slave
replication) on the same hardware (virtual machines). If I don't find
any serious problem by September I will use it in production.


Now, for your application to communicate with the two MySQL master
nodes there are several solutions:
1. Design your app to use both MySQL servers. With this solution you
can even split writes to one server and reads to the other. It's
up to you to do whatever you want.

2. Set up a simple heartbeat solution with a floating virtual IP
between your MySQL servers. If one of the MySQL servers (I mean the
whole OS) crashes, the floating IP will be attached to the second
server.

3. On each app server, install TCP load-balancer software like
HAProxy and balance the MySQL TCP connections between your app
servers and the MySQL servers.
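The third option could be sketched with a minimal HAProxy fragment like the one below (the addresses, server names, and check user are illustrative assumptions, not taken from this thread):

```
# haproxy.cfg fragment -- illustrative values only
listen mysql-masters
    bind 127.0.0.1:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check
    server db1 192.168.0.11:3306 check
    server db2 192.168.0.12:3306 check
```

The app then connects to 127.0.0.1:3306 on its own host, and HAProxy drops a server from rotation when its health check fails; the check user must exist on both MySQL servers.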

Regards,
akops


On 18/7/2012 6:11 PM, Carl Kabbe wrote:

We are actually facing both capacity and availability issues at the
same time.

Our current primary server is a Dell T410 (single processor, 32 GB
memory) with a Dell T310 (single processor, 16GB memory) as backup.
Normally, the backup server is running as a slave to the primary server
and we manually switch it over when the primary server fails (which it
did last Saturday morning at 2:00AM.)  The switch over process takes
10-15 minutes although I am reducing that to about five minutes with
some scripting (the changeover is a little more complex than you might
think because we have a middle piece, also MySQL, that we use to
determine where the real data is.)  Until six months ago, the time
delay was not a problem because the customer processes could tolerate
such a delay.  However, we now have a couple of water parks using our
system at their gate, in their gift shops and in their concessions so
we need to now move the changeover time to a short enough period that
they really don't notice.  Hence, the need I have described as 'high
availability'.

The T410 is normally reasonably capable of processing our

transactions, i.e., the customers are comfortable with the latency.
However, we have been on the T310 since last Saturday and it is awful,
basically barely able to keep up and producing unacceptable latency.
Further, our load will double in the 

Re: Aborted clients

2012-06-12 Thread Howard Hart

On 06/12/2012 05:10 AM, Johan De Meersman wrote:

- Original Message -


From: Claudio Nanni claudio.na...@gmail.com
 "Print out warnings such as Aborted connection... to the error log."
The dots are not telling whether they comprise Aborted clients as well.

Hah, how's that for selective blindness. Totally missed that :-)


I find the MySQL error log extremely poor; as far as I know it is one
of the MySQL features (like authentication) stuck at the dawn of
MySQL time.
It is very hard to debug non-basic things like your issue.
From what I have experienced, usually "Aborted connection" means wrong
credentials, while "Aborted clients" means the client (typically PHP)
did not close the connection properly.

Yep, that's it; but indeed, since aborted clients aren't logged, I seem 
to be in a ditch.


Is there any chance you can check whether the code is closing the
connections to the MySQL database?

Oh, yes, millions upon billions of lines of wonderfully obscure Java stack 
traces that reveal little more than "Lost connection to database" every 
couple of thousand lines.

Everything works fine most of the time, then randomly some queries will get 
slow, and eventually the connections will drop. Rinse and repeat.

Oh well. Thanks for pointing out my reading error, I'm off to lart the devs 
into profiling their code to figure out *what* causes the slowness. Guess I'll 
have to set up some tcpdumps, too.

Watch out for this one, especially if the aborted connections are all 
getting charged against a single client. Per the URL below, with a 
misbehaving application not closing connections correctly, I've seen 
this spontaneously blacklist a client IP. The only way to unblacklist 
it afterward is to run FLUSH HOSTS on the MySQL server.


Also, I didn't see a one-to-one correspondence between the global 
max_connect_errors setting and Aborted_connects (from show global status 
like '%abort%';), so it's hard to tell when you're approaching the 
per-client blacklist limit.


http://dev.mysql.com/doc/refman/5.0/en/blocked-host.html
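For reference, a minimal sequence for inspecting and clearing this state (the threshold value below is just an example):

```sql
-- How many aborted connects/clients has the server counted?
SHOW GLOBAL STATUS LIKE 'Aborted%';

-- The per-host error limit that triggers the blacklist
SHOW GLOBAL VARIABLES LIKE 'max_connect_errors';

-- Raise the limit (example value) and clear any blacklisted hosts
SET GLOBAL max_connect_errors = 10000;
FLUSH HOSTS;
```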

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



RE: 5.1.51 Database Replica Slows Down Suddenly, Lags For Days, and Recovers Without Intervention

2011-10-23 Thread Howard Hart
One cause of heavy replication lag we noticed was a misbehaving 
application blasting updates (and commits) onto the master InnoDB tables from 
multiple clients. Since slave replication is single-threaded, it couldn't keep 
up I/O-wise, while the master seemed to show reasonably low load throughout. 

The temporary fix was to just set innodb_flush_log_at_trx_commit = 2 to only 
flush the log file to disk once every second. Result was the lag went from 
5,000 seconds behind and climbing to 0 in literally seconds, and the slave 
load dropped way below 1 again.
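For reference, the change is a one-line setting in my.cnf (it's also dynamic, so it can be tried at runtime with SET GLOBAL first):

```
# my.cnf fragment
[mysqld]
# Write the InnoDB log at each commit, but flush it to disk
# only about once per second
innodb_flush_log_at_trx_commit = 2
```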

The catch (there's always one, of course) is that if the OS crashes or loses 
power, you could lose up to one second's worth of committed transactions.

Howard

From: Claudio Nanni [claudio.na...@gmail.com]
Sent: Sunday, October 23, 2011 2:27 PM
To: Tyler Poland
Cc: mysql@lists.mysql.com
Subject: Re: 5.1.51 Database Replica Slows Down Suddenly, Lags For Days, and 
Recovers Without Intervention

Luis,

Very hard to tackle.
In my experience, excluding external (to MySQL) bottlenecks like hardware,
OS, etc., the 'suspects' are the shared resources 'guarded' by unique mutexes,
like the query cache or the key cache.
Since you do not use MyISAM it cannot be the key cache. Since you use Percona,
the query cache is disabled by default.
You should go a bit lower level and catch the system calls with one of the
tools you surely know to see if there are waits on the semaphores.

I would also like to point out that the 'seconds behind master' reported by
the slave is not reliable.

Good luck!

Claudio

2011/10/23 Tyler Poland tpol...@engineyard.com

 Luis,

 How large is your database?  Have you checked for an increase in write
 activity on the master leading up to this? Are you running a backup against
 the replica?

 Thank you,
 Tyler

 Sent from my Droid Bionic
 On Oct 23, 2011 5:40 AM, Luis Motta Campos luismottacam...@yahoo.co.uk
 wrote:

  Fellow DBAs and MySQL Users
 
  [apologies for possible duplicates - I've posted this to
  percona-discuss...@googlegroups.com also]
 
  I've been hunting an issue with my database cluster for several months
 now
  without much success. Maybe I'm overlooking something here.
 
  I've been observing the database slowing down and lagging behind for
  thousands of seconds (sometimes over the course of several days) even
  without any query load besides replication itself.
 
  I am running Percona MySQL 5.1.51 (InnoDB plug-in version 1.12) on Dell
  R710 (6 x 3.5 inch 15K RPM disks in RAID10; 24GB RAM; 2x Quad-core Intel
  processors) running Debian Lenny. MySQL data, binary logs, relay logs,
  innodb log files are on separated partitions from each other, on a RAID
  system separated from the operating system disks.
 
  Default Storage Engine is InnoDB, and the usual InnoDB memory structures
  are stable and look healthy.
 
  I have about 500 (read) queries per second on average, and about 10% of
  this as writes on the master.
 
  I've been observing something that looks like between 6 and 10 pending
  reads per second uniformly on my cacti graphs.
 
  The issue is characterized by the server suddenly slowing down writes
  without any previous warning or change, and lagging behind for several
  thousand seconds (triggering all sorts of alerts on my monitoring
 system). I
  don't observe extra CPU activity, just a reduced disk access ratio (from
  about 5-6MB/s to 500KB/s) and replication lagging. I could correlate it
  neither with InnoDB hashing activity, nor with long-running queries, nor with
  background read/write thread activity.
 
  I don't have any clue what is causing this behavior, and I'm unable to
  reproduce it under controlled conditions. I've observed the issue both on
  servers with and without workload (apart from the usual replication load).
  I am sure no changes were applied to the server or to the cluster.
 
  I'm looking forward to suggestions and theories on the issue - all ideas
  are welcome.
  Thank you for your time and attention,
  Kind regards,
  --
  Luis Motta Campos
  is a DBA, Foodie, and Photographer
 
 
 
 




--
Claudio

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Mysql and Flashback

2008-11-25 Thread Howard Hart

Slave lag should be easy to manage with a simple bash script, e.g.:

desired_delay=3600  # one hour lag

while sleep 60
do
    behind=`mysql -u root --password=foobar -e "show slave status\G" |
        grep 'Seconds_Behind_Master:' | awk '{ print $2 }'`

    # Seconds_Behind_Master is NULL while replication is stopped
    if [ "$behind" != "NULL" ] && [ "$behind" -lt "$desired_delay" ]; then
        # Too close to the master -- pause replication until the next check
        mysql -u root --password=foobar -e "stop slave"
    else
        mysql -u root --password=foobar -e "start slave"
    fi
done


Not pretty, and it could use more sanity checks, with the caveat that 
Seconds_Behind_Master can sometimes return interesting values (it is 
NULL while the slave SQL thread is stopped, for instance)
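On that note, a small sketch of a safer way to read that field from `SHOW SLAVE STATUS\G` output (the sample text below is made-up, abbreviated output): since the value is the literal string NULL while replication is stopped, a script should handle the non-numeric case.

```python
def seconds_behind(show_slave_status_text):
    r"""Extract Seconds_Behind_Master from `SHOW SLAVE STATUS\G` output.

    Returns an int, or None when the value is NULL (SQL thread stopped)
    or the field is missing.
    """
    for line in show_slave_status_text.splitlines():
        line = line.strip()
        if line.startswith("Seconds_Behind_Master:"):
            value = line.split(":", 1)[1].strip()
            return None if value == "NULL" else int(value)
    return None

# Hypothetical sample output, abbreviated:
sample = """
         Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
    Seconds_Behind_Master: 4213
"""
print(seconds_behind(sample))  # prints 4213
```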


Howard Hart
Ooma, Inc.

ewen fortune wrote:

Hi Shain,

If you are using InnoDB, it's possible to patch it to allow this functionality.

Percona are in the early stages of developing a patch specifically to
allow flashback type access to previous table states.

https://bugs.launchpad.net/percona-patches/+bug/301925

If you wanted to go down the slave lag road, Maatkit has a tool for doing that.

http://www.maatkit.org/doc/mk-slave-delay.html

Cheers,

Ewen

On Tue, Nov 25, 2008 at 6:57 PM, Shain Miley [EMAIL PROTECTED] wrote:
  

Hello,
We are planning to attempt an Oracle-to-MySQL migration in the near
future. The issue of a MySQL equivalent to Oracle's Flashback came up.
After some digging, it appears that there is no such feature in
MySQL. One thought I had was to introduce some intentional replication lag
(say 12 to 24 hours); that way, if we needed to revert, we would have
the option of doing so.

Does anyone:

a: know how to set up replication so it intentionally lags?

b: know of a better way of engineering a Flashback equivalent for MySQL?

Thanks in advance,

Shain

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]