Re: Can't Connect Localhost

2013-09-01 Thread John Smith

On Sun, 1/9/13, Reindl Harald  wrote:

 Subject: Re: Can't Connect Localhost
 To: mysql@lists.mysql.com
 Date: Sunday, 1 September, 2013, 2:44 PM
 
> > I looked in mysql.config.pl and no "localhost" :(
 
> "mysql.config.pl" from what software damned?
> "mysql.config.pl" does not exist in context of mysql

It exists in the following folder on my Win8 box:

/Program Files (x86)/MySQL/MySQL Server 5.5/bin

Yes, apparently it does exist in context of mysql. At any rate, it didn't work. 
Is there another config file in there?

> *what client software you are using to connect to a
> mysql-server*

I had written a Python script. However, even if I try your and the other 
fellow's suggestion and open up a command prompt with admin privileges and type

mysql -u root -h 127.0.0.1 -p

it just opens up a mysql session, and only if I do it from the folder 
where the mysql client lives, since it's not in the PATH. Otherwise, it complains it 
can't find the command.
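As an aside, since that bin folder isn't in the PATH, the client can also be invoked by its full path from any directory (a sketch; the path is the default 5.5 install location mentioned above):

```shell
# Quote the path because of the spaces; runs from any current directory.
"C:\Program Files (x86)\MySQL\MySQL Server 5.5\bin\mysql.exe" -u root -h 127.0.0.1 -p
```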
TIA,
John 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: Can't Connect Localhost

2013-09-01 Thread John Smith

On Sun, 1/9/13, Reindl Harald  wrote:

 Subject: Re: Can't Connect Localhost
 To: mysql@lists.mysql.com
 Date: Sunday, 1 September, 2013, 1:34 PM
 
 Where can you change this? Typically
 in the config file
 of the software. Without crystal balls we don't know.
 
 What about telling us what software you are speaking about?

Well, I'm using MySQL ;) In my first post, I stated version 5.5 and on Win8/64. 
I looked in mysql.config.pl and no "localhost" :(
John




Re: Can't Connect Localhost

2013-09-01 Thread John Smith
_mysql_exceptions.OperationalError: (2003, "Can't connect to MySQL server on 
'localhost' (10061)")

My question: How do I change from localhost to 127.0.0.1?
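For what it's worth, this change normally happens in the client code rather than in a server config file: the driver raising that error is MySQLdb, and it picks TCP when given an IP address and the named-pipe/socket transport when given "localhost". A minimal sketch; the wrapper function and the credentials are illustrative, only the host/port/user/passwd/db keywords are MySQLdb's:

```python
def connect_kwargs(user, passwd, db, host="127.0.0.1", port=3306):
    """Build keyword arguments for MySQLdb.connect().

    Passing the IP address 127.0.0.1 (rather than "localhost") makes the
    client connect over TCP to the given port instead of trying the
    Windows named pipe / Unix socket transport.
    """
    return {"host": host, "port": port,
            "user": user, "passwd": passwd, "db": db}

# Illustrative use (needs the MySQLdb package and a running server):
#   import MySQLdb
#   conn = MySQLdb.connect(**connect_kwargs("root", "secret", "test"))
```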
TIA
John


On Sun, 1/9/13, Terry J Fundak  wrote:

 Subject: Re: Can't Connect Localhost
 To: mysql@lists.mysql.com
 Cc: "John Smith" 
 Date: Sunday, 1 September, 2013, 3:33 AM
 
 Hi John,
 
 Starting over….
 
 What is the "error message"?
 
 Terry
 
 ___
 Terry J Fundak
 Systems Engineer
 Network Design and Security Solutions for SMBs
 Tech Support - Client and Server Systems
 
 TJSoftworks
 1834 Chadwick Court
 Santa Rosa, CA 95401
 (707) 849-1000 Cell
 e-Mail: te...@tjsoftworks.com
 
 
 
 
 
 On Aug 31, 2013, at 3:26 PM, John Smith 
 wrote:
 
 > Hi;
 > How do I change my connection from localhost to
 127.0.0.1 on a Win8 machine?
 > TIA,
 > John
 > 
 





Can't Connect Localhost

2013-08-31 Thread John Smith
Hi;
How do I change my connection from localhost to 127.0.0.1 on a Win8 machine?
TIA,
John




Normalization vs. Performance

2008-08-26 Thread John Smith
Hi,

here my table which stores multiple trees with nested sets:

CREATE TABLE `posts` (
  `posting_id` int(11) unsigned NOT NULL auto_increment,
  `root_id` int(11) unsigned NOT NULL,
  `lft` int(11) unsigned NOT NULL,
  `rgt` int(11) unsigned NOT NULL,
  `subject` varchar(400) collate latin1_german1_ci NOT NULL,
  `posting_date` datetime NOT NULL,
  -- some additional fields
  PRIMARY KEY  (`posting_id`),
  UNIQUE KEY `id_und_lft` (`posting_id`,`lft`),
  KEY `root_id` (`root_id`),
  KEY `username` (`username`),
  KEY `root-id_und_lft` (`root_id`,`lft`)
) ENGINE=InnoDB  DEFAULT CHARSET=latin1 COLLATE=latin1_german1_ci;

I can select all trees with the following SELECT:

SELECT a.posting_id, a.root_id, a.username, a.subject, count(*) AS level
FROM posts AS a
JOIN posts AS b ON (a.lft BETWEEN b.lft AND b.rgt)
AND (b.root_id = a.root_id)
GROUP BY a.posting_id
ORDER BY a.root_id,a.lft

Works fine, uses indexes nicely.

Now I want to display those trees in reverse chronological order.
(The date of a tree is determined by the `posting_date` field of its root)
That's not a big problem since the joining is already done correctly.

SELECT a.posting_id, a.root_id, a.username, a.subject, count(*) AS level
FROM posts AS a
JOIN posts AS b ON (a.lft BETWEEN b.lft AND b.rgt)
AND (b.root_id = a.root_id)
GROUP BY a.posting_id
ORDER BY b.posting_date DESC, a.root_id,a.lft

But here's the problem. Since the results are now ordered by fields from
more than one table, indexes no longer work and filesort is used.

So how bad is this? The mentioned query will be the query which is used
the most in my application (yes, it is going to be a forum).
Should I break normalization and save the date of the root in each node row?
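The denormalized variant can be sketched in miniature (Python's bundled sqlite3 stands in for MySQL here, and the table contents are made up): each node row carries a copy of its root's posting_date, so the ORDER BY touches only one table and a (root_date, root_id, lft) index could in principle satisfy the sort.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (
  posting_id INTEGER PRIMARY KEY,
  root_id    INTEGER NOT NULL,
  lft        INTEGER NOT NULL,
  rgt        INTEGER NOT NULL,
  root_date  TEXT    NOT NULL  -- denormalized copy of the root's posting_date
);
-- two trees: root 1 (older) with one child, root 3 (newer), single node
INSERT INTO posts VALUES (1, 1, 1, 4, '2008-08-01'),
                         (2, 1, 2, 3, '2008-08-01'),
                         (3, 3, 1, 2, '2008-08-20');
""")

# The sort now uses only columns of the row itself, so no join column
# appears in the ORDER BY.
rows = conn.execute(
    "SELECT posting_id FROM posts "
    "ORDER BY root_date DESC, root_id, lft").fetchall()
print([r[0] for r in rows])  # newest tree first: [3, 1, 2]
```

The cost is keeping the copies in sync: every new node must be written with its root's date.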

Greets, John





SELECT [n] nested sets

2008-08-12 Thread John Smith
Hi,

there is a table with several trees in nested set form in it.

CREATE TABLE IF NOT EXISTS `posts` (
  `id` int(11) unsigned NOT NULL auto_increment,
  `root_id` int(11) unsigned NOT NULL,
  `lft` int(11) unsigned NOT NULL,
  `rgt` int(11) unsigned NOT NULL,
  `date` datetime NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `root_id` (`root_id`)
) ENGINE=InnoDB  DEFAULT CHARSET=latin1 COLLATE=latin1_german1_ci
AUTO_INCREMENT=21 ;

I can select all trees and order them by the date-field of their root.

SELECT n.*,
count(*)+(n.lft>1) AS level
FROM posts  n, posts p
WHERE n.lft BETWEEN p.lft AND p.rgt
AND (p.root_id = n.root_id)
AND (p.id != n.id OR n.lft = 1)
GROUP BY n.id
ORDER BY p.date DESC,n.root_id,n.lft

Now, i want to select only the 5 most recent trees (the date of a tree is
determined by the date of its root). How do I do this elegantly?

I could simply select the top 200 rows and truncate the last selected tree
(because it will most likely no be completely selected). But then I would
not have control over the number of trees (that would not be a big
problem, users will learn to live with that). But the big Problem with
this solution is, that there is no way to select the next 5/[n] most
recent (for the next page for example), since I don't know where the first
200 rows were truncated / in other words, I don't know where the last tree
is which was not displayed on the previous page.

Another way would be a subquery which only selects the ids of the most
recent five roots. That is clean, works great but lacks performance
(especially in mysql).

Is there another way?
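One shape the subquery variant can take, sketched with Python's bundled sqlite3 (the data is made up, and MySQL's optimizer may, as noted, handle the derived table less well): pick the n newest roots first, then join to pull each of those trees whole. Paging then works by adding an OFFSET to the inner LIMIT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (
  id      INTEGER PRIMARY KEY,
  root_id INTEGER NOT NULL,
  lft     INTEGER NOT NULL,
  rgt     INTEGER NOT NULL,
  date    TEXT    NOT NULL
);
-- three trees, roots 1, 3 and 5 (root rows have lft = 1)
INSERT INTO posts VALUES
  (1, 1, 1, 4, '2008-08-01'), (2, 1, 2, 3, '2008-08-02'),
  (3, 3, 1, 2, '2008-08-05'),
  (5, 5, 1, 2, '2008-08-10');
""")

# Select the n most recent roots, then fetch each of those trees whole.
n = 2
rows = conn.execute("""
  SELECT p.id, p.root_id
  FROM posts p
  JOIN (SELECT root_id, date FROM posts
        WHERE lft = 1
        ORDER BY date DESC LIMIT ?) recent
    ON p.root_id = recent.root_id
  ORDER BY recent.date DESC, p.root_id, p.lft
""", (n,)).fetchall()
print(rows)  # trees rooted at 5 and 3: [(5, 5), (3, 3)]
```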

Greets, John Smith





Re: Locking certain rows in a transaction

2008-08-11 Thread John Smith
Perrin Harkins wrote:
> Assuming you're using InnoDB tables, "SELECT...FOR UPDATE" will lock
> the rows as you describe.  It can prevent other inserts and updates to
> neighboring rows as well, depending on what isolation level you're
> running (default is REPEATABLE READ).

Thanks, in fact it even does more than I expected. When another client
tries to read a row previously selected FOR UPDATE by someone else, it
will read just fine. If it tries to select the row with FOR UPDATE, it will
have to wait.
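For the archives, the full statement sequence for the nested-set insert then looks like this (a sketch; the id and root_id values are made up, and autocommit must be off or an explicit transaction used):

```sql
START TRANSACTION;

-- 1 SELECT: find the insertion point, taking InnoDB row locks.
--   A concurrent plain SELECT still reads the old snapshot, but a
--   concurrent SELECT ... FOR UPDATE on these rows blocks until COMMIT.
SELECT rgt FROM posts WHERE id = 7 FOR UPDATE;   -- say it returns 10

-- 2 UPDATEs: open a gap for the new node.
UPDATE posts SET rgt = rgt + 2 WHERE root_id = 1 AND rgt >= 10;
UPDATE posts SET lft = lft + 2 WHERE root_id = 1 AND lft > 10;

-- 1 INSERT: the new node fills the gap.
INSERT INTO posts (root_id, lft, rgt) VALUES (1, 10, 11);

COMMIT;
```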





Locking certain rows in a transaction

2008-08-09 Thread John Smith
Hi,

i'm currently experimenting with nested sets. To insert a new node, I
need 1 SELECT, 2 UPDATE and 1 INSERT statement.
Of course all of this will be packed into a transaction, because the table
could get corrupted if not all of the mentioned queries are executed.

Now here's the question: I need to lock certain rows, so that no other
client can read or write those rows (I want those clients to wait until the
transaction is committed or rolled back). I don't want to lock the entire
table, since that would block too many clients which never intended to
read the critical rows.

Please don't simply post a link to the documentation. I tried to read it,
but I didn't find the solution.

Example would be nice.

Greets, peter





Fulltext insert/update

2004-11-15 Thread John Smith
I know that having a fulltext index on a table slows inserts and updates
down (or I think I do... I might have misread something).

I have a large table that uses fulltext, in which I plan to update nearly
90% of the rows (400,000+) and add new ones.

It currently takes around 35 minutes to remove the index and then
another 35 minutes to re-index it.

I don't need the index until after I have finished with the
updating and run myisamchk -rq --sort-index --analyze --sort-records=7
on the .MYI file.

Is it worth over an hour of removing and re-adding the fulltext index?

It's just that my code takes 4 hours to finish and I am reluctant to add
on another hour!

John





Re: Queries taking 60 seconds+

2004-11-11 Thread John Smith
On Thu, 2004-11-11 at 14:59, Victor Pendleton wrote:
> If you build the composit indexes as suggested, does your performance 
> improve?

Erm, do you think it would? It's just that, with such a large table that is
also compressed, building the indexes takes ages.






Re: Queries taking 60 seconds+

2004-11-11 Thread John Smith
On Thu, 2004-11-11 at 14:36, Philippe Poelvoorde wrote:
> Hi,
> 
> could you try adding a key with
> ALTER TABLE properties ADD INDEX(countyid,old,price);
> It could maybe help getting less rows at a time.

I dropped the old and price columns from the WHERE clause and the number of rows
scanned was the same as without the LIMIT, which is good, so I am
guessing the 3-column index will do the same.

The problem is that it still took 30+ seconds to return 9000 rows.

Not acceptable performance, really.

Any other ideas?

John





Re: Queries taking 60 seconds+

2004-11-11 Thread John Smith
On Thu, 2004-11-11 at 13:58, Victor Pendleton wrote:
> What does the explain plan look like?
> 

id: 1
select_type: SIMPLE
table: properties
type: ref
possible_keys: old,price,countyid
key: countyid
key_len: 3
ref: const
rows: 9233
Extra: Using where; Using filesort

The filesort I know is a problem, but I thought I read that sorting the table
with myisamchk by the price key helps with this.

Also, if I remove the ORDER BY and the price >= 1 AND old = 0 conditions, the query
still takes 46 seconds, which is too long for the dynamic site I am trying
to build.

John





Queries taking 60 seconds+

2004-11-11 Thread John Smith
Afternoon All,

I have the following table structure:

CREATE TABLE properties (
  id int(11) NOT NULL auto_increment,
  propid varchar(14) NOT NULL default '0',
  townid varchar(255) NOT NULL default '',
  countyid mediumint(5) NOT NULL default '0',
  address text NOT NULL,
  price int(14) NOT NULL default '0',
  image text NOT NULL,
  description text NOT NULL,
  link text NOT NULL,
  underoffer tinyint(1) NOT NULL default '0',
  sold tinyint(1) NOT NULL default '0',
  added int(14) NOT NULL default '0',
  `new` tinyint(1) NOT NULL default '1',
  old tinyint(1) NOT NULL default '0',
  PRIMARY KEY  (id),
  KEY old (old),
  KEY `new` (`new`),
  KEY sold (sold),
  KEY underoffer (underoffer),
  KEY propid (propid),
  KEY price (price),
  KEY countyid (countyid),
  FULLTEXT KEY address (address)
) ENGINE=MyISAM

I have run the following commands on it:

myisamchk -rq --sort-index --analyze --sort-records=7 properties.MYI
myisampack properties.MYI
myisamchk -rq --sort-index --analyze properties.MYI

It contains just over 400,000 rows and compressed is 163 Meg in size. 

I have just upgraded to 4.1 as well to see if I can squeeze any more 
performance out.

This query:

SELECT SQL_CACHE SQL_CALC_FOUND_ROWS address,price, image, description,
link , underoffer, sold ,added ,new  FROM properties  WHERE  countyid =
44 AND price >= 1 AND old=0   ORDER  BY price desc LIMIT 100, 10;

Takes 123 seconds and examines 19068 rows according to the query log;
without the LIMIT it returns 9512 rows.

I am all resourced out and have spent weeks googling and reading the
docs etc.

I am pretty sure returning all properties matching the countyid should
not take 2 minutes, and I am confused about why it examines 19068 rows,
too.
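The single-column keys can't serve this query on their own: it filters on countyid and old and sorts on price, which calls for one composite index such as (countyid, old, price). A miniature illustration with Python's bundled sqlite3 (its planner differs from MySQL 4.1's, but the principle carries over; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE properties (id INTEGER PRIMARY KEY,
                         countyid INTEGER, old INTEGER, price INTEGER);
CREATE INDEX county_old_price ON properties (countyid, old, price);
INSERT INTO properties (countyid, old, price) VALUES
  (44, 0, 100000), (44, 0, 250000), (44, 1, 90000), (12, 0, 50000);
""")

plan = "\n".join(row[-1] for row in conn.execute("""
  EXPLAIN QUERY PLAN
  SELECT price FROM properties
  WHERE countyid = 44 AND old = 0
  ORDER BY price DESC
"""))
print(plan)

# The composite index serves both the WHERE clause and the sort, so the
# plan mentions the index and no filesort-equivalent temp b-tree appears.
assert "county_old_price" in plan
assert "TEMP B-TREE" not in plan
```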

It could be down to my server, I suppose, which is a "jailed" (no idea
what that is; it's just what my hosting company calls it) FreeBSD virtual
server.

I have been playing with mysqld variables in my.cnf too. These are
probably all set too high; I am not really sure of the specs of my
server, as I don't get that info:

set-variable = query_cache_type=2
set-variable= key_buffer=300M
set-variable= query_cache_size=200M
set-variable= query_cache_limit=50M
set-variable= max_allowed_packet=10M
set-variable= table_cache=50
set-variable= sort_buffer=5M
set-variable= read_rnd_buffer=5M
set-variable= record_buffer=5M
set-variable= tmp_table_size=64M
set-variable= thread_cache=9
# Try number of CPU's*2 for thread_concurrency
set-variable= thread_concurrency=2
set-variable= ft_min_word_len=3
set-variable = long_query_time=2
log-long-format
log-slow-queries = /var/log/mysqld.slow.log

The table is read-only for most of the day and will get updated (once I
get the queries down to an acceptable level) nightly, if that helps.

Any suggestions or further tips would be very helpful. It's taken me
months to get my code to input the data in the way I need; now getting
the data back out is the problem.

Thanks
John





Re: Out of my depth.

2004-11-03 Thread John Smith
Right, I upgraded my version of MySQL last night. After a lot of digging about, I found a 
few bugs to do with table corruption that could possibly have been the problem.

I am re-running my script and it seems to be going fine this time.

I am now using 4.0.28; I was using an early version of 4 and can't remember the exact 
version, but it contained "gamma" in the version string if that tracks it down.

I actually changed my script to convert the table to InnoDB, do the 
inserts/selects/updates, convert back to MyISAM (as I need fulltext), then run 
myisamchk. It took about 3-4 hours longer than a pure MyISAM version.

Hohum,

John 






Out of my depth.

2004-11-02 Thread John Smith
Hi All,

I have built a search application in php/libcurl and I store its results in MySQL.

The problem is that I am not used to dealing with the sizes of tables my search 
application produces: roughly around 400,000 rows in a table the last time I ran 
it correctly.

Right to my problem.

I fork 20 instances of my spider at a time; these forks all have a connection to the 
database and do selects, inserts and updates on the same table at once. When it's going 
at full steam it can be inserting 1000s of rows a minute.

I am using MyISAM as I need its fulltext search capabilities. I remove the fulltext 
index before I start any changes to the table.

I am finding that my php script becomes unable to select from the database at 
random points. I have tracked this down to error 127: the table is corrupt.

Before I start my spiders (before the script forks) I run myisamchk -r on my .MYI file, 
but it corrupts during the script's execution time, and this means it is no longer able 
to select from the DB (crucial to know whether a record needs updating or inserting as a 
new one).

Any hints, or any more information needed from me, etc., would be great.

My table structure is:

CREATE TABLE thetable (
  id int(11) NOT NULL auto_increment,
  sid int(11) NOT NULL default '1',
  pid varchar(14) NOT NULL default '0',
  tid varchar(255) NOT NULL default '',
  cid varchar(255) NOT NULL default '',
  location text NOT NULL,
  number int(14) NOT NULL default '0',
  image text NOT NULL,
  description text NOT NULL,
  link text NOT NULL,
  uo tinyint(1) NOT NULL default '0',
  sd tinyint(1) NOT NULL default '0',
  added int(14) NOT NULL default '0',
  new tinyint(4) NOT NULL default '1',
  old tinyint(4) NOT NULL default '0',
  PRIMARY KEY  (id),
  KEY sid (sid),
  KEY old (old),
  KEY new (new),
  KEY sd (sd),
  KEY uo (uo),
  KEY pid (pid),
  KEY tid (tid),
  KEY cid (cid)
)

Ta,
John






Innodb error

2003-06-18 Thread John Smith
I'm getting this error in my log:

--
030618 15:08:24  mysqld started
030618 15:08:24  InnoDB: Started
030618 15:08:24  /usr/sbin/mysqld: Can't create/write
to file '/var/run/mysqld/mysqld.pid' (Errcode: 2)
/usr/sbin/mysqld: ready for connections.
Version: '4.0.12'  socket: '/var/lib/mysql/mysql.sock'  port: 3306
030618 15:08:26  InnoDB: Assertion failure in thread
28680 in file page0page.c line 450
InnoDB: We intentionally generate a memory trap.
InnoDB: Send a detailed bug report to
[EMAIL PROTECTED]
mysqld got signal 11;
This could be because you hit a bug. It is also
possible that this binary
or one of the libraries it was linked against is
corrupt, improperly built,
or misconfigured. This error can also be caused by
malfunctioning hardware.
We will try our best to scrape up some info that will
hopefully help diagnose
the problem, but since we have already crashed,
something is definitely wrong
and this may fail.

key_buffer_size=67108864
read_buffer_size=1044480
030618 15:08:26  mysqld ended
--

Prior to this, I was doing 'mysqldump -A ...' and
mysql crashed. After rebooting the server I can get
mysql running again, but most queries show 'connection
lost', 'mysql gone away', etc. errors - but usually
the queries work after the client automatically
reconnects. 'mysqlcheck [-r] ...' dies with
'connection lost'.

Please help!

TIA,
John

__
Do you Yahoo!?
SBC Yahoo! DSL - Now only $29.95 per month!
http://sbc.yahoo.com




mysql-3.23.56 and RH 7.3 - Table 'mysql.host' doesn't exist

2003-06-15 Thread John Smith
Hi!

I have redhat 7.3 with the mysql packages that came with it.
Since RHSA-2003:093-14 came out I upgraded mysql with
mysql-3.23.56-1.73.i386.rpm
mysql-devel-3.23.56-1.73.i386.rpm
mysql-server-3.23.56-1.73.i386.rpm
When I try to start mysqld I get the following error message
in the log:
Table 'mysql.host' doesn't exist.

After googling around a bit and looking at the mysql and
redhat lists I found that this is well-known problem so I
didn't panic and indeed found several useful suggestions:
1. The path to the basedir and/or datadir are not set
properly but I did set it properly in /etc/my.cnf (the
original that came with rh7.3 and the update were incorrect).
2. The privileges and/or ownership of the basedir and/or
datadir are not set properly, but I also checked that, even
setting them to 777 with ownership given to user mysql. I
(would :)) use user mysql to run mysqld.
3. Mysqld is complaining about innodb but since I don't need
it I give the option --skip-innodb to mysqld and this sorts
all innodb issues out.
After all this the bloody error message is still there and
mysqld wouldn't start!
Anyone has an idea?

A related question: I thought about installing the RPM from
mysql.com instead of the one provided by redhat. Since the
RPM from redhat doesn't seem to be of the same nature as the
one from mysql.com I'm not sure if I can simply do rpm -Fvh
 on top of the package
from redhat. Do you have any comments on this?
I would be very grateful for any help, thanks,

Daniel

Some (hopefully) useful info:
glibc-2.2.5-34
kernel-2.4.20-18
arch: i686
_
Add photos to your e-mail with MSN 8. Get 2 months FREE*. 
http://join.msn.com/?page=features/featuredemail



field separator

2001-04-04 Thread John Smith

I've searched many places and can't get a good answer. I hope I can get 
help here.

I'm trying to dump all the data in my database to separate files (per table) 
and be able to specify a field separator character.

for example, I'd like to use || as my field separator, I did the following:

>mysqldump -u root -p --tab --fields-enclosed-by='||' --no-create-info  myDB 
> > output_file.txt

I get the following error:
mysqldump: Got error: 1: Can't create/write to file 
'/home/website/--fields-enclosed-by=||/alloc.txt' (Errcode: 2)
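The error path gives the diagnosis away: --tab takes a directory argument, so the next option on the command line was swallowed as that directory. Also, a separator *between* fields is set with --fields-terminated-by; --fields-enclosed-by wraps each field instead. A sketch (the /tmp/dump directory is illustrative; note the server process, not the client, writes the .txt files, so it needs write access there):

```shell
# Writes one .txt (data) and one .sql (schema) file per table into /tmp/dump,
# with || between fields.
mysqldump -u root -p --tab=/tmp/dump --fields-terminated-by='||' \
    --no-create-info myDB
```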

thanks in advance
-john k. smith
_
Get your FREE download of MSN Explorer at http://explorer.msn.com


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



