Re: /tmp/mysql.sock disappears

2008-02-29 Thread Sebastian Mendel

Ian schrieb:

Hi,

I am running MySQL 5.0.45 on FreeBSD 4.11, and for some strange reason
/tmp/mysql.sock keeps disappearing. We are forced to kill -9 mysql and
restart it, causing DB corruption, as there is no other way of telling it to
stop once that file is gone. I have tried to find any reason why this
happens and there are no errors, no core files, nothing - the file just
disappears.


why do you need to kill if the socket is missing?

does '/etc/init.d/mysqld stop|restart' not work without a socket?

--
Sebastian




Re: /tmp/mysql.sock disappears

2008-02-29 Thread Ian
Hi,

I wouldn't have thought so, but whenever that file is missing and I try the
stop/restart, it just sits at the 'waiting for pids' part of the stop loop
(where it lists the PIDs) and never stops the server - I have left it
for over an hour and it never stops, it just keeps going through the stop loop.

Cheers
Ian

On Fri, Feb 29, 2008 at 12:43 PM, Sebastian Mendel [EMAIL PROTECTED]
wrote:

 Ian schrieb:
  Hi,
 
  I am running MySQL 5.0.45 on FreeBSD 4.11, and for some strange reason
  /tmp/mysql.sock keeps disappearing. We are forced to kill -9 mysql and
  restart it, causing DB corruption, as there is no other way of telling it to
  stop once that file is gone. I have tried to find any reason why this
  happens and there are no errors, no core files, nothing - the file just
  disappears.

 why do you need to kill if the socket is missing?

 does '/etc/init.d/mysqld stop|restart' not work without a socket?

 --
 Sebastian





Re: /tmp/mysql.sock disappears

2008-02-29 Thread Ian
Hi,

I don't know about the network, but I know that using PHP we can't access anything
in MySQL once the file is gone. When it happens again, I'll be sure to check the
network.

Cheers
Ian

On Fri, Feb 29, 2008 at 12:54 PM, Sebastian Mendel [EMAIL PROTECTED]
wrote:

 Ian schrieb:
  Hi,
 
  I wouldn't have thought so, but whenever that file is missing and I try the
  stop/restart, it just sits at the 'waiting for pids' part of the stop loop
  (where it lists the PIDs) and never stops the server - I have left it
  for over an hour and it never stops, it just keeps going through the stop loop.

 does MySQL still respond to network connections when the socket is 'gone'?

 --
 Sebastian





Re: /tmp/mysql.sock disappears

2008-02-29 Thread Sebastian Mendel

Ian schrieb:

Hi,

I wouldn't have thought so, but whenever that file is missing and I try the
stop/restart, it just sits at the 'waiting for pids' part of the stop loop
(where it lists the PIDs) and never stops the server - I have left it
for over an hour and it never stops, it just keeps going through the stop loop.


does MySQL still respond to network connections when the socket is 'gone'?

--
Sebastian




Re: Debugging mysql limits

2008-02-29 Thread Phil
Just a little more info on this.

I tried setting all of this up on a home server with, as far as I can see,
more or less identical specs, the only difference being that it is a 64-bit
Linux build rather than 32-bit.

The same INSERT ... ON DUPLICATE KEY UPDATE takes 3 minutes.

I spent all day yesterday trying to figure out what limits are being hit,
without success.

I would certainly appreciate any pointers on what to look at.

Phil

On Thu, Feb 28, 2008 at 11:19 AM, Phil [EMAIL PROTECTED] wrote:

 I'm trying to figure out which limits I'm hitting on some inserts.

 I have 50-plus tables, let's call them A_USER, B_USER, C_USER, etc., which I
 refresh daily with updated (and sometimes new) data.

 I insert the data into a temporary table using LOAD DATA INFILE. This
 works great and is very fast.

 Then I do an

 INSERT INTO A_USER (Select col1,col2,col3...,col 20, 0,0,0,0,0,0,etc etc
 from A_TEMP) on DUPLICATE KEY UPDATE col1=A_TEMP.col1,col2= etc

 The sizes in the tables range from 500 entries up to 750,000.

 Two of them, in the 200,000 range, take 2-3 minutes to complete; the
 largest, at 750,000, takes over an hour.

 A sampling of my .cnf file:

 old_passwords=1
 max_connections = 50
 max_user_connections = 50
 table_cache=2000
 open_files_limit=4000
 log-slow-queries = /var/log/mysql-slow.log
 long_query_time = 12
 log-queries-not-using-indexes
 thread_cache_size = 100
 query_cache_size = 64M
 key_buffer_size = 512M
 join_buffer_size = 24M
 sort_buffer_size = 64M
 read_buffer_size = 4M
 tmp_table_size = 64M
 max_heap_table_size = 64M

 There is 2 GB of RAM in the server, which I would gladly increase if I knew I
 could tweak these settings to fix this.

 Any ideas what I should do to figure out what is causing it?

 Regards

 Phil
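One way to narrow down where the time goes - a sketch, reusing the A_TEMP name from the post and assuming the slow part is the INSERT ... ON DUPLICATE KEY UPDATE itself - is to compare the session handler counters before and after the statement, and to EXPLAIN the read side on its own:

FLUSH STATUS;
-- ... run the INSERT ... SELECT ... ON DUPLICATE KEY UPDATE here, then:
SHOW SESSION STATUS LIKE 'Handler%';      -- index vs. full-scan row accesses, rows written
SHOW SESSION STATUS LIKE 'Created_tmp%';  -- temporary tables, and whether they spilled to disk

-- The SELECT half can also be checked separately:
EXPLAIN SELECT * FROM A_TEMP;

A Handler_read_rnd_next count far above the row count, or a growing Created_tmp_disk_tables, usually points at missing indexes or undersized buffers rather than at the 32-bit vs. 64-bit difference.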




Re: User Preferences?

2008-02-29 Thread Curtis Maurand



I think that I'd set up a varchar column and store a tab-separated
list in it, then parse it upon retrieval.

Curtis


Dan Buettner wrote:
 Waynn, I've used both schemes 1 and 2 as you describe, and in my experience
 2 is the best way to go.  It's easy to scale up as you add users and
 settings, and it's easy to make changes if the meaning of settings should
 change (i.e. you need to do a back-end change to people's settings).

 #1 is harder to make those kinds of back-end updates on, and harder for
 someone troubleshooting to make sense of the data.

 #3 may not scale well - you would end up having to track too many tables,
 I think.

 What I'm doing in my current project is using a data model that has a
 method for each preference setting, and returns a sensible value by default
 if the user has no pref set for a given lookup key; otherwise, I return what
 the user has set.  This means adding a method every time I add a preference
 setting, which on the one hand means adding code - on the other hand,
 chances are very high that if I am adding the ability for a user to set a
 preference, I'm already adding code somewhere to ensure that preference has
 an effect.

 HTH,
 Dan


 On Thu, Feb 28, 2008 at 9:50 AM, Waynn Lue [EMAIL PROTECTED] wrote:

  I'm looking for a good way to store user preferences.  The most
  straightforward way is just to add a column to the Users table for
  each preference we're looking to store.  The downside is that it requires
  an ALTER TABLE, which gets prohibitively expensive as the table gets
  larger, and it's fairly inflexible.  I've come up with a few alternatives,
  and I'm wondering if people have ideas or suggestions, as this has to be
  a common problem.  A quick Google search didn't turn up anything.

  1.  Store the preferences as a binary blob on the Users table.  This
  blob could be either a blob, or an integer that I use application
  logic to read/write from, or I could use the SET datatype.
  2.  Store the preferences in normalized form, with a new table called
  UserPreferences that has UserId, Setting, Preference, and we add a row
  for each setting of that user.
  3.  Create a separate table each time we want to add a new setting,
  with UserId, WhateverTheNameOfThePreferenceIs.

  Anyone have any experience with this, or better suggestions?

  Thanks,
  Waynn


--
Curtis Maurand
Head Honcho
Xyonet Hosting Services
Biddeford, ME 04005
mailto:[EMAIL PROTECTED]
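For concreteness, option 2 from Waynn's list might look something like this - only a sketch, with the table and column names taken from his description and the types and example values made up:

CREATE TABLE UserPreferences (
  UserId     INT UNSIGNED NOT NULL,
  Setting    VARCHAR(64)  NOT NULL,
  Preference VARCHAR(255) NOT NULL,
  PRIMARY KEY (UserId, Setting)
);

-- Adding or changing one setting never touches the Users table:
INSERT INTO UserPreferences (UserId, Setting, Preference)
VALUES (42, 'timezone', 'UTC')
ON DUPLICATE KEY UPDATE Preference = VALUES(Preference);

-- Reading one setting; the application falls back to a default when no row exists:
SELECT Preference FROM UserPreferences WHERE UserId = 42 AND Setting = 'timezone';

The composite primary key on (UserId, Setting) keeps both the upsert and the lookup to a single index probe, which is what makes this layout cheap to extend with new settings.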


Re: Odd Update Question.

2008-02-29 Thread m i l e s
Folks,
Thanks for the 'help'.  Oy.

I figured it out from some help on the Lasso discussion list.

All I had to do was properly address ALL the tables I wanted to touch.

So this:

UPDATE tbe_gallery
SET tbe_gsa.gsa_paperprice = tbe_gallery.gallery_gsaprice_paper
WHERE tbe_gallery.gallery_id = tbe_images.img_rel_id AND
  tbe_images.img_orig_filename = tbe_gsa.gsa_id

Should have been this:

UPDATE tbe_gallery, tbe_gsa, tbe_images
SET tbe_gsa.gsa_paperprice = tbe_gallery.gallery_gsaprice_paper
WHERE tbe_gallery.gallery_id = tbe_images.img_rel_id AND
  tbe_images.img_orig_filename = tbe_gsa.gsa_id

And voilà, it works!

Miles.


Re: LIKE problem with characters 'å' (norwegian) and 'a' (mysql bug?)

2008-02-29 Thread Kent Larsson
 I get incorrect results when searching for the Norwegian character 'å'
 using LIKE.  I get rows with 'a' in them, and vice versa: if I search for
 'a', I get results which have 'å' in them in addition to the ones with 'a'.

Make sure that your table has:
  charset=utf8
  collation=utf8_norwegian_ci
And that every column ALSO has:
  charset=utf8
  collation=utf8_norwegian_ci

Notice that I am making 'utf8_norwegian_ci' up - I looked for it using my MySQL
Query Browser but couldn't find it. As I'm from Sweden, I've had similar
problems (åäöÅÄÖ matched åaäÅÄAÖO), and setting things up as above but using
(the existing) 'utf8_swedish_ci' worked in my case.
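To see which collations a server actually offers and to apply one - a sketch, with 'mytable' as a placeholder table name:

-- List the utf8 collations this server ships with before picking one:
SHOW COLLATION LIKE 'utf8%';

-- Convert an existing table and all of its character columns in one statement:
ALTER TABLE mytable CONVERT TO CHARACTER SET utf8 COLLATE utf8_swedish_ci;

If the list includes utf8_danish_ci, that may be an even closer match for Norwegian, since Danish and Norwegian use the same alphabetical ordering; otherwise utf8_swedish_ci is the usual fallback.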





Optimizing a query

2008-02-29 Thread Chris W
I was wondering if someone could tell me what things I might need to do 
to make this query as fast as possible.


I am developing a web site where users will have access to certain 
things based on what groups they are in and what groups have access to 
certain things.  There are several different types of things they have 
access to based on the group, but for this discussion let's limit it to 
pages.  For a user to view a page they have to be associated with one or 
more of the groups that the page is linked to.

Since the relationship between pages and groups and between users and groups 
is many-to-many, I have a table just for that relationship.  So here are my 
tables:

Group:
 contains various info about the group with key field GroupID.
User:
 contains various info about the user along with the key field UserID.
Page:
 contains various info about a page on the site along with its PageID.

GroupLink:
CREATE TABLE  `grouplink` (
 `LinkType` set('user','page','template','templatefile','menu') NOT 
NULL default '',

 `ID` int(10) unsigned NOT NULL default '0',
 `GroupID` int(10) unsigned NOT NULL default '0',
 PRIMARY KEY  (`LinkType`,`ID`,`GroupID`)
) ;

Since there are several things that will be linked to groups, I decided 
to use one table for all the links, with the LinkType field designating 
which kind of thing we are linking to a group.


For example, suppose I had page 18 linked to groups 2, 5, 6, and 7, and 
user 23 linked to groups 1, 2, and 9.  The rows in the table would look 
like this:


page, 18, 2
page, 18, 5
page, 18, 6
page, 18, 7
user, 23, 1
user, 23, 2
user, 23, 9

Now I want to know if user 23 can access page 18 so I execute this query

SELECT COUNT(`GroupID`)
FROM `grouplink` u
JOIN `grouplink` p USING(`GroupID`)
WHERE u.`LinkType` = 'user' AND p.`LinkType` = 'page'
AND u.`ID` = '23' AND p.`ID` = '18'

Since User 23 and Page 18 are both linked to group 2, COUNT(`GroupID`) 
should return 1.
The way the rules I have set work, if the count is 1 or larger then that 
user has access to the page.


Now the question: is there anything I can do to make this query faster?

--
Chris W
KE5GIX

Protect your digital freedom and privacy, eliminate DRM, 
learn more at http://www.defectivebydesign.org/what_is_drm;


Ham Radio Repeater Database.
http://hrrdb.com





Re: Optimizing a query

2008-02-29 Thread Dan Buettner
Chris, this should already be pretty fast as it is using a primary key in
its entirety, and as long as the index size remains manageable MySQL will be
able to keep it in memory for fast access.

That said, doing away with the aggregate function might speed things up just
slightly.

You don't care how many matches there are, or which match provided access
(right?) - you just care whether there is or is not a match.

So, perhaps you could do this instead:

SELECT GroupID
FROM `grouplink` u
JOIN `grouplink` p USING(`GroupID`)
WHERE u.`LinkType` = 'user' AND p.`LinkType` = 'page'
AND u.`ID` = '23' AND p.`ID` = '18'
LIMIT 1

and if you get a result, the user has access; if you get an empty set, the
user has no access.

By not COUNTing and using a limit 1, you let the database answer your
question without examining more rows than it needs to.
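Another way to phrase the same yes/no check (a sketch of an equivalent formulation) is EXISTS, which likewise stops at the first matching row and returns 1 or 0 directly:

SELECT EXISTS(
  SELECT 1
  FROM `grouplink` u
  JOIN `grouplink` p USING(`GroupID`)
  WHERE u.`LinkType` = 'user' AND p.`LinkType` = 'page'
    AND u.`ID` = '23' AND p.`ID` = '18'
) AS has_access;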

Make sense?

-Dan


On Fri, Feb 29, 2008 at 9:31 PM, Chris W [EMAIL PROTECTED] wrote:

 I was wondering if someone could tell me what things I might need to do
 to make this query as fast as possible.

 I am developing a web site where users will have access to certain
 things based on what groups they are in and what groups have access to
 certain things.  There are several different types of things they have
 access to based on the group, but for this discussion let's limit it to
 pages.  For a user to view a page they have to be associated with one or
 more of the groups that the page is linked to.

 Since the relationship between pages and groups and between users and groups
 is many-to-many, I have a table just for that relationship.  So here are my
 tables:
 Group:
  contains various info about the group with key field GroupID.
 User:
  contains various info about the user along with the key field UserID.
 Page:
  contains various info about a page on the site along with its PageID.

 GroupLink:
 CREATE TABLE  `grouplink` (
  `LinkType` set('user','page','template','templatefile','menu') NOT
 NULL default '',
  `ID` int(10) unsigned NOT NULL default '0',
  `GroupID` int(10) unsigned NOT NULL default '0',
  PRIMARY KEY  (`LinkType`,`ID`,`GroupID`)
 ) ;

 Since there are several things that will be linked to groups, I decided
 to use one table for all the links, with the LinkType field designating
 which kind of thing we are linking to a group.

 For example, suppose I had page 18 linked to groups 2, 5, 6, and 7, and
 user 23 linked to groups 1, 2, and 9.  The rows in the table would look
 like this:

 page, 18, 2
 page, 18, 5
 page, 18, 6
 page, 18, 7
 user, 23, 1
 user, 23, 2
 user, 23, 9

 Now I want to know if user 23 can access page 18 so I execute this query

 SELECT COUNT(`GroupID`)
 FROM `grouplink` u
 JOIN `grouplink` p USING(`GroupID`)
 WHERE u.`LinkType` = 'user' AND p.`LinkType` = 'page'
 AND u.`ID` = '23' AND p.`ID` = '18'

 Since User 23 and Page 18 are both linked to group 2, COUNT(`GroupID`)
 should return 1.
 The way the rules I have set work, if the count is 1 or larger then that
 user has access to the page.

 Now the question: is there anything I can do to make this query faster?

 --
 Chris W
 KE5GIX

 Protect your digital freedom and privacy, eliminate DRM,
 learn more at http://www.defectivebydesign.org/what_is_drm;

 Ham Radio Repeater Database.
 http://hrrdb.com

