Re: [solved]LOAD DATA INTO doesn't work correctly with utf8

2007-08-31 Thread Harald Vajkonny
Hi Ananda,

Ananda Kumar wrote:

 So you set the collation_database=utf8_bin, what was your
 character_set_database values.
character_set_database is utf8. The collation utf8_bin slows down
queries, but is necessary when dealing with multilingual information.
utf8_general_ci is faster, but cannot distinguish, in keys, between
characters that sort to the same position in national character
sets, e.g. German a and ä, or French e and é.
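The difference can be illustrated at the prompt. A minimal sketch, assuming a utf8 client connection; the table name is hypothetical:

```sql
-- utf8_general_ci folds accented variants together; utf8_bin keeps them apart.
SELECT _utf8 'a' = _utf8 'ä' COLLATE utf8_general_ci;  -- 1 (treated as equal)
SELECT _utf8 'a' = _utf8 'ä' COLLATE utf8_bin;         -- 0 (kept distinct)

-- Consequence for keys: under utf8_general_ci the second insert fails with
-- a duplicate-key error; with COLLATE utf8_bin both rows are accepted.
CREATE TABLE words (w VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_general_ci,
                    UNIQUE KEY (w));
INSERT INTO words VALUES ('a');
INSERT INTO words VALUES ('ä');
```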

Regards,
Harald

-- 
Wouldn't the sentence 'I want to put a hyphen between the words Fish
and And and And and Chips in my Fish-And-Chips sign' have been clearer
if quotation marks had been placed before Fish, and between Fish and
and, and and and And, and And and and, and and and And, and And and
and, and and and Chips, as well as after Chips?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: [solved]LOAD DATA INTO doesn't work correctly with utf8

2007-08-31 Thread Ananda Kumar
OK, I will also try this, as we also load data from a flat file.

regards
anandkl
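The flat-file load mentioned above might be sketched like this (file path, table, and delimiters are hypothetical; the CHARACTER SET clause exists only in later 5.0 releases, otherwise the server falls back on character_set_database):

```sql
SET collation_database = 'utf8_bin';     -- as discussed in this thread
LOAD DATA INFILE '/tmp/data.txt'
INTO TABLE mytable
CHARACTER SET utf8                       -- tell the server the file's encoding
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
```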


On 8/31/07, Harald Vajkonny [EMAIL PROTECTED] wrote:

 Hi Ananda,

 Ananda Kumar wrote:
 
  So you set the collation_database=utf8_bin, what was your
  character_set_database values.
 character_set_database is utf8. The collation utf8_bin slows down
 queries, but is necessary when dealing with multilingual information.
 utf8_general_ci is faster, but cannot distinguish, in keys, between
 characters that sort to the same position in national character
 sets, e.g. German a and ä, or French e and é.

 Regards,
 Harald

 --
 Wouldn't the sentence 'I want to put a hyphen between the words Fish
 and And and And and Chips in my Fish-And-Chips sign' have been clearer
 if quotation marks had been placed before Fish, and between Fish and
 and, and and and And, and And and and, and and and And, and And and
 and, and and and Chips, as well as after Chips?




Database page corruption on disk occurring during mysqldump on a fresh database

2007-08-31 Thread Maurice Volaski
A 64-bit Gentoo Linux box had just been upgraded from MySQL 4.1 to
5.0.44 fresh (by dumping in 4.1 and restoring in 5.0.44), and almost
immediately after that, during a period when the database was not used,
a crash occurred during a scripted mysqldump. So I restored, and days
later it happened again. The crash details seem to suggest some other
aspect of the operating system is at fault, or even that the memory or
disk is flipping a bit. Or could I be running into a bug in this
version of MySQL?


Here's the output of the crash
---
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 533.
InnoDB: You may have to recover from a backup.
070827  3:10:04  InnoDB: Page dump in ascii and hex (16384 bytes):
 len 16384; hex

[dump itself deleted for brevity]

 ;InnoDB: End of page dump
070827  3:10:04  InnoDB: Page checksum 646563254, 
prior-to-4.0.14-form checksum 2415947328
InnoDB: stored checksum 4187530870, prior-to-4.0.14-form stored 
checksum 2415947328

InnoDB: Page lsn 0 4409041, low 4 bytes of lsn at page end 4409041
InnoDB: Page number (if stored to page already) 533,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page may be an index page where index id is 0 35
InnoDB: (index PRIMARY of table elegance/image)
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 533.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: See also InnoDB: 
http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html

InnoDB: about forcing recovery.
InnoDB: Ending processing because of a corrupt database page.
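The remedies the log itself suggests, sketched as statements. The table name is taken from the log above; innodb_force_recovery should be raised cautiously, per the manual page the log cites:

```sql
CHECK TABLE elegance.image;   -- scan the table named in the log for corruption
-- If the page is corrupt: dump, drop, and re-import that table, e.g.
--   mysqldump elegance image > image.sql    (from the shell)
-- As a last resort, restart with forced recovery set in my.cnf:
--   [mysqld]
--   innodb_force_recovery = 1
```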

--

Maurice Volaski, [EMAIL PROTECTED]
Computing Support, Rose F. Kennedy Center
Albert Einstein College of Medicine of Yeshiva University




Server hangs on get_lock

2007-08-31 Thread Niklas Westerberg
Hello,
We seem to have a problem with the usage of GET_LOCK on our mysql
server. Is there anyone who has experienced similar behavior, or could
provide some insight into what is going on?

/Niklas

Symptoms:
* mysqld CPU-usage is 100%
* Queries of the type GET_LOCK('lock_name', 10) seem to abound in the
mysql process list. These remain in the list for far longer than the
expected 10 seconds. At one point the number of such queries exceeded 600,
all of which had been active between 400 and 600 seconds. As it happens,
a GET_LOCK query is the first one executed by our web application on
each request. There were also some RELEASE_LOCK queries in the list.

* The number of queries hanging on the list happened to exactly match
the maximum number of concurrent requests from the web servers.

* The queries remain for a time in the process list even after the web
servers (apache/php) have been taken down.

* The database seems to exhibit a slow decline in performance between
its most recent restart and a full stop. This has not been thoroughly
investigated yet, however.

* Accessing the server through the CLI still works and regular queries
return as expected.
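For reference, the documented 5.0 semantics the symptoms revolve around, using the lock name from above:

```sql
SELECT GET_LOCK('lock_name', 10);   -- 1 = acquired, 0 = timed out, NULL = error
-- ... critical section ...
SELECT RELEASE_LOCK('lock_name');   -- 1 = released, 0 = held by another thread
-- Caveat: in MySQL 5.0 a connection holds at most one advisory lock;
-- a second GET_LOCK silently releases the first, which can bite layered code.
```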

Occurrence:
* Intermittent; sometimes weeks apart, sometimes once a day for a few days
in a row.

* Without an apparent correlation with the load of the machine.

Remedy until now:
Restarting mysqld and after that the apache processes, which sometimes
start to die of segmentation faults if this is neglected.

Configuration:
Server:
2 dual-core processors (i.e. theoretical maximum CPU usage of 400%)
Tested on Mysql 5.0.44 and 5.0.38
Both the MyISAM and the InnoDB engines are in use (mostly InnoDB).
InnoDB uses a raw partition residing on a software raid level 10
consisting of 10 disks.

Clients:
Apache 2.2.4 (mpm-prefork)
PDO 1.0.3
PDO-MYSQL 1.0.2
PHP 5.2.1





JOIN migration from Oracle to MySQL

2007-08-31 Thread spikerlion
Hello,

I have two LEFT OUTER JOINs in the WHERE clause of an Oracle script, like:

select ...
from...
where
...
and PT1.ID (+) = bl.PARENTTYPE_1
and PT2.ID (+) = bl.PARENTTYPE_2
...


MySQL supports LEFT OUTER JOIN in the FROM clause, but two joins with the same
table aren't accepted.

example:
select ...
from
tableA PT1 LEFT OUTER JOIN tableC bl ON (PT1.ID = bl.PARENTTYPE_1), 
tableB PT2 LEFT OUTER JOIN tableC bl ON (and PT2.ID = bl.PARENTTYPE_2),
...

- doesn't work.


Does a solution exist for this example? Are there other syntax possibilities?


regards, 
Spiker




Re: converting some rows from utf-8 to iso-8859-1

2007-08-31 Thread Olav Mørkrid
Yep, through PHP it's easy: just do utf8_decode($value). But is it
possible directly from the MySQL command line?
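It may be possible directly in SQL, with heavy caveats. If the column is declared latin1 but some rows hold UTF-8 bytes, one approach is to reinterpret the stored bytes as UTF-8 and let MySQL transcode back to latin1 on assignment. A hedged sketch (test on a copy first; rows that are already valid latin1 would be mangled by this):

```sql
UPDATE mytable
SET mycolumn = CONVERT(BINARY mycolumn USING utf8)
WHERE id BETWEEN 500 AND 600;
```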

On 31/08/2007, Ken Peng [EMAIL PROTECTED] wrote:

 On Thu, 30 Aug 2007 11:41:14 +0200, Olav Mørkrid
 [EMAIL PROTECTED] said:
  if a table column is supposed to contain text in iso-8859-1, but utf-8
  encoding have snuck in on a few rows by mistake, how are these rows
  converted into iso-8859-1?
 
  what i am looking for is something like this:
 
  update mytable set mycolumn = utf8toiso88591(mycolumn) where id
  between 500 and 600;
 

 I don't know if MySQL has such a charset-converting routine,
 but you can do it in an application script; e.g. in Perl you can convert
 with:

 use Encode;
 my $iso_str = encode('iso-8859-1',decode('utf8',$utf8_str));

 First select the utf8 strings from the db one by one, storing each in
 $utf8_str, then use the function above to convert to iso-8859-1, then
 write it back to MySQL.
 --
   Ken Peng
   [EMAIL PROTECTED]







Two/more separately unique columns in a table?

2007-08-31 Thread Esbach, Brandon
Hi all,

I'm guessing this is a fairly common question, but I've drawn a blank so
far with all workarounds possible.
Here's the rough scenario:
We have data streaming in constantly from various offsite locations.
This comprises several details: a session, a unit of that session,
and a measurement of that unit.
Data is captured and transferred on a timed process, so often data is
repeated in the data transfer packets.  This repeating is unavoidable as
the software used to capture dumps this data for each measurement and
each unit for the session it's working on.

Due to the volume, a bulk update is done using values() with an insert
statement.

Unfortunately, often there are repeats of either session, unit, or
measurement (as there could be a new measurement or unit for the capture
that needs to be associated with the session).
The problem I've been experiencing is fairly straightforward (I hope): I
have two, sometimes three columns in any given record that need to
always be unique.  This comprises an ID (the key column), and one
(depending on the table, sometimes two) GUIDs which should be unique at
all times for the entire table.

I've tried setting the additional columns to be a primary key (which in
turn sets them up to be unique when viewed under Schema Indices in
the MySQL Administrator tool); however, this does not give an error (or
fail silently with insert ignore) when I insert a duplicate - MySQL
seems quite happy to add the duplicate record.

At the moment, I'm running a process at startup which simply
gathers all GUIDs from the database and compares them as it runs through
the data (then adds new ones as it runs). This is hardly reliable, and
also means starting the service takes several hours to gather the
existing GUIDs at current data levels... almost frightening to think
what will end up happening as the data expands.

I'm hoping that I'm just missing something really daft and that there is
a much easier way to ensure several columns are always unique in my
table, while still benefiting from the bulk-load insert.


Details on installation used:
MySQL version:   v 4.1.14 (scheduled for upgrade in 2008)
OS: Windows Server 2003 (std edition)
Memory: 2GB




Fwd: Calendar event query

2007-08-31 Thread abhishek jain
On 8/30/07, C.R.Vegelin [EMAIL PROTECTED] wrote:

 Hi Abhishek,

 Maybe it's off topic, but do you know:
 http://dev.mysql.com/downloads/other/eventum/
 a ready-to-use issue tracking system ...
 maybe also suitable for your purposes.

 Regards, Cor

 - Original Message -
 From: abhishek jain [EMAIL PROTECTED]
 Subject: Calendar event query


  Hi friends,
  I need to make a calendar with some events.
  I have created an events table with the parameters like:
  eventid
  event_from
  event_till
  recurring
  recurring_after_day
  event_type
  userid
 
  and so on.
  My problem is to prepare a query which can give me events on a
  particular day, or rather all days (if the calendar view is daily),
  months (if the view is monthly), and so on for year and week, including
  events which are repeated, so that I can show them on my calendar. The
  query will run for a fairly large number of users, so it should be
  efficient.
 
  I think the catch is the recurring events: for them the entry will
  be in one row, while they will have virtually many rows, one for each
  recurring event.
 
  So i want somthing like this for a particular userid :
  event_on count(*) event_type
  235 7
  237 8
  246 6
  254  6
  26   10 6
  27   15
  .
 
  Please help me.
  Thanks,
  Abhishek jain
 



Hi,
Eventum will not solve my purpose.
Please reply; the calendar script is urgent, and I need a query that actually
gives me all events within a time period, including the recurring events.
Thanks,
Abhishek Jain
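One common shape for such a query, sketched against the columns named above. The `days` helper table, the user id, and the date window are hypothetical assumptions; `days(d)` holds one row per calendar day:

```sql
SELECT d.d, e.eventid, e.event_type
FROM days d
JOIN events e
  ON e.userid = 42
 AND e.event_from <= d.d
 AND (e.event_till >= d.d                -- spans this day, or
      OR (e.recurring = 1                -- recurs onto this day
          AND MOD(DATEDIFF(d.d, e.event_from), e.recurring_after_day) = 0))
WHERE d.d BETWEEN '2007-09-01' AND '2007-09-30';
```

Grouping by d.d (or by month) then gives per-day or per-month counts like the sample output sketched above.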


RE: Two/more separately unique columns in a table?

2007-08-31 Thread Esbach, Brandon
Hi, and thanks Baron;
I should have been a bit clearer on the bulk insert - I am using a bulk
insert statement, as you assumed.
I'll put this onto the db server and check; I think that's a more
future-proof method. Will this affect any of my linked tables (linked via
the row's primary key (id))?

-Original Message-
From: Baron Schwartz [mailto:[EMAIL PROTECTED] 
Sent: 31 August 2007 14:28
To: Esbach, Brandon
Cc: MySQL User Group
Subject: Re: Two/more separately unique columns in a table?

Hi,

Esbach, Brandon wrote:
 Hi all,
 
 I'm guessing this is a fairly common question, but I've drawn a blank 
 so far with all workarounds possible.
 Here's the rough scenario:
 We have data streaming in constantly from various offsite locations.
 This comprises several details: a session, a unit of that session, 
 and a measurement of that unit.
 Data is captured and transferred on a timed process, so often data is 
 repeated in the data transfer packets.  This repeating is unavoidable 
 as the software used to capture dumps this data for each measurement 
 and each unit for the session it's working on.
 
 Due to the volume, a bulk update is done using values() with an 
 insert statement.
 
 Unfortunately, often there are repeats of either session, unit, or 
 measurement (as there could be new measurement or unit for the capture

 that needs to be associated to the session).
 The problem I've been experiencing is fairly straightforward (I hope):

 I have two, sometimes three columns in any given record that need to 
 always be unique.  This comprises an ID (the key column), and one 
 (depending on the table, sometimes two) GUIDs which should be unique 
 at all times for the entire table.
 
 I've tried setting the additional columns to be a primary key (which 
 in turn sets them up to be unique when viewed under Schema Indices

 on the MySQL Administrator tool); however this does not give an error 
 (or fail silently with insert ignore) when I insert a duplicate - 
 mySQL seems quite happy to add the duplicate record.
 
 At the moment, I'm running a process in the beginning which simply 
 gathers all guids from the database and compares them as it runs 
 through the data (then adds new ones as it runs).. This is hardly 
 reliable, and also means starting the service would take several hours

 to gather the existing guids at current data levels... almost 
 frightening to think, what will end up happening as the data expands.

It sounds like you need a separate primary key and unique index:

create table t (
id int not null,
guid char(32) not null,
unique key (guid),
primary key(id)
);

Then you can do REPLACE or IGNORE with the LOAD DATA INFILE.  I can't
tell if you are actually using LOAD DATA INFILE or if your bulk load 
is a big INSERT statement.  If you're using an INSERT with multiple
VALUES() sections, you can also use ON DUPLICATE KEY UPDATE.
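A usage sketch of the ON DUPLICATE KEY UPDATE suggestion against the table above (the values and file path are illustrative):

```sql
INSERT INTO t (id, guid) VALUES
  (1, 'g-0001'), (2, 'g-0002'), (3, 'g-0001')   -- third row repeats a GUID
ON DUPLICATE KEY UPDATE guid = VALUES(guid);    -- effectively a no-op skip
-- Or, for a file-based bulk load, skip duplicates silently:
-- LOAD DATA INFILE '/tmp/data.txt' IGNORE INTO TABLE t;
```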

I agree the current strategy won't hold up well over time.

Baron




Re: JOIN migration from Oracle to MySQL

2007-08-31 Thread Baron Schwartz

[EMAIL PROTECTED] wrote:

Hello,

I've two LEFT OUTER JOINS in the WHERE section in an Oracle script like:

select ...
from...
where
...
and PT1.ID (+) = bl.PARENTTYPE_1
and PT2.ID (+) = bl.PARENTTYPE_2
...


MySQL knows LEFT OUTER JOINS in the FROM section but two joins with the same 
table aren't accepted.

example:
select ...
from
tableA PT1 LEFT OUTER JOIN tableC bl ON (PT1.ID = bl.PARENTTYPE_1), 
tableB PT2 LEFT OUTER JOIN tableC bl ON (and PT2.ID = bl.PARENTTYPE_2),


The exact error message would be helpful, but I'm seeing at least two 
problems:


1) you're aliasing two tables as 'bl'.  The aliases need to be unique.
2) The second ON clause shouldn't start with AND.

Otherwise you should have no problem doing this.
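With both fixes applied, the example from the question would become something like this sketch (the elided column lists stay elided):

```sql
SELECT ...
FROM tableA PT1 LEFT OUTER JOIN tableC bl ON (PT1.ID = bl.PARENTTYPE_1),
     tableB PT2 LEFT OUTER JOIN tableC bk ON (PT2.ID = bk.PARENTTYPE_2)
```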




Re: Two/more separately unique columns in a table?

2007-08-31 Thread Baron Schwartz

Hi,

Esbach, Brandon wrote:

Hi all,

I'm guessing this is a fairly common question, but I've drawn a blank so
far with all workarounds possible.
Here's the rough scenario:
We have data streaming in constantly from various offsite locations.
This comprises several details: a session, a unit of that session,
and a measurement of that unit.
Data is captured and transferred on a timed process, so often data is
repeated in the data transfer packets.  This repeating is unavoidable as
the software used to capture dumps this data for each measurement and
each unit for the session it's working on.

Due to the volume, a bulk update is done using values() with an insert
statement.

Unfortunately, often there are repeats of either session, unit, or
measurement (as there could be new measurement or unit for the capture
that needs to be associated to the session).
The problem I've been experiencing is fairly straightforward (I hope): I
have two, sometimes three columns in any given record that need to
always be unique.  This comprises an ID (the key column), and one
(depending on the table, sometimes two) GUIDs which should be unique at
all times for the entire table.

I've tried setting the additional columns to be a primary key (which in
turn sets them up to be unique when viewed under Schema Indices on
the MySQL Administrator tool); however this does not give an error (or
fail silently with insert ignore) when I insert a duplicate - mySQL
seems quite happy to add the duplicate record.

At the moment, I'm running a process in the beginning which simply
gathers all guids from the database and compares them as it runs through
the data (then adds new ones as it runs).. This is hardly reliable, and
also means starting the service would take several hours to gather the
existing guids at current data levels... almost frightening to think,
what will end up happening as the data expands.


It sounds like you need a separate primary key and unique index:

create table t (
id int not null,
guid char(32) not null,
unique key (guid),
primary key(id)
);

Then you can do REPLACE or IGNORE with the LOAD DATA INFILE.  I can't 
tell if you are actually using LOAD DATA INFILE or if your bulk load 
is a big INSERT statement.  If you're using an INSERT with multiple 
VALUES() sections, you can also use ON DUPLICATE KEY UPDATE.


I agree the current strategy won't hold up well over time.

Baron




thread_concurrency in linux

2007-08-31 Thread Andrew Braithwaite
Hi,

Does anyone know if thread_concurrency works on Linux, or is it just
limited to Solaris and Windows?

I know the general rule is number of CPUs * 2, but will this actually have
any effect with Linux's threading model?
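For what it's worth, the manual documents thread_concurrency as a hint that only some platforms honor, while the InnoDB counterpart does take effect on Linux. A hedged my.cnf sketch; the values are illustrative, not recommendations:

```
[mysqld]
# thread_concurrency is documented as a Solaris hint; likely a no-op on Linux
thread_concurrency = 8
# the InnoDB setting below is honored on Linux
innodb_thread_concurrency = 8
```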

Thanks for any help :)

Andrew







Re: JOIN migration from Oracle to MySQL

2007-08-31 Thread spikerlion
Hello,

thank you - now it works.

d_parenttype PT1 LEFT OUTER JOIN t_booklists bl ON (PT1.ID = bl.PARENTTYPE_1),
d_parenttype PT2 LEFT OUTER JOIN t_booklists bk ON (PT2.ID = bk.PARENTTYPE_2)


I had to add the alias to all listed fields in the select.


regards,
Spiker



 Original Message 
 Date: Fri, 31 Aug 2007 09:30:13 -0400
 From: Baron Schwartz [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 CC: mysql@lists.mysql.com
 Subject: Re: JOIN migration from Oracle to MySQL

 [EMAIL PROTECTED] wrote:
  Hello,
  
  I've two LEFT OUTER JOINS in the WHERE section in an Oracle script like:
  
  select ...
  from...
  where
  ...
  and PT1.ID (+) = bl.PARENTTYPE_1
  and PT2.ID (+) = bl.PARENTTYPE_2
  ...
  
  
  MySQL knows LEFT OUTER JOINS in the FROM section but two joins with the
 same table aren't accepted.
  
  example:
  select ...
  from
  tableA PT1 LEFT OUTER JOIN tableC bl ON (PT1.ID = bl.PARENTTYPE_1), 
  tableB PT2 LEFT OUTER JOIN tableC bl ON (and PT2.ID = bl.PARENTTYPE_2),
 
 The exact error message would be helpful, but I'm seeing at least two 
 problems:
 
 1) you're aliasing two tables as 'bl'.  The aliases need to be unique.
 2) The second ON clause shouldn't start with AND.
 
 Otherwise you should have no problem doing this.





hardware clusters

2007-08-31 Thread Sid Lane
all,

I am working on a budget proposal for next year to put in a MySQL cluster
but wanted to validate (or correct) a couple of assumptions:

1.  do storage nodes benefit far more from additional RAM than they do from
faster CPUs/multiple cores?

2.  do SQL nodes benefit more from faster CPUs/multiple cores?

basically, I am trying to find the best bang/buck configuration for each
server type and am assuming it's better to spend more $ on RAM for storage
nodes and CPU for SQL nodes - is this correct?

is anyone aware of any benchmarking that has been done with various
configurations?

any help appreciated...

thanks!


excessive time spent in statistics status

2007-08-31 Thread Lucio Chiappetti
I have some queries, involving a largish number of JOINs, which
are apparently very slow or even take forever (a mysqladmin processlist
shows them remaining in the statistics status for a long time; in most
cases I have to kill them after several minutes).

When I first had the problem I googled around and found some reference
(which I've lost) saying that the statistics status is actually what one
sees with an EXPLAIN SELECT, and that this is done preliminary to the
actual query. It also said it might occur with a large number of joins
because this analysis, for n joins, MIGHT try up to n! combinations, unless
one somehow specifies the priorities (but the author did not remember
how).

I thought I had overcome the problem using a feature of the CREATE
VIEW command (see below), but apparently I simply moved it to a higher n.

Now I tried to see how it scales with the number of joins, and the curious
thing is that e.g. for n=9 it works fast, for n=15 it works slowly, for
n=18 it works fast again, and for n=20 it takes an infinite time.

I'll first explain my background :

 - I have a number of tables (let's call them t0, t1, t2 ...)
 - all of them have an auto_increment column called seq which is also
   an index

 - one table (t0) is more important (actually it is list of
   celestial X-ray sources while the other are celestial objects
   in other wavebands but this is irrelevant to you).

 - I have precomputed correlation tables among t0 and each of
   the other. These tables are called eg. t0ti, have two columns
   t0 and ti corresponding to the t0.seq and ti.seq of objects which
   are associated. They are indexed on (t0,ti). Note that an
   object in t0 can be associated with 1 or more or zero (t0ti.ti null)
   objects in ti.

 - I originally (already under mysql 3) devised a way to identify 
   counterparts in MORE tables (all these associations are based on 
   spherical distance + other criteria). This involved creating a
   working table G

   This table has columns named t0 t1 ... tn (containing the pointers
   t0.seq t1.seq ... for counterparts associated and validated according 
   to some criteria) plus other service columns

   The simultaneous access was achieved in our interface by a mechanism
   we called virtual tables, which essentially was

 SELECT
  some subset of columns in some of the t0...tn
  or some expression thereof
 FROM
  G left join t0 on G.t0=t0.seq
left join t1 on G.t1=t1.seq
...
left join tn on G.tn=tn.seq

   We refer to the t0...tn as the member tables of G.

   We have different versions of G corresponding to different sets
   of member tables and different association criteria.

   The largest of our cases has 26 different members.

   Our mechanism was such that we defined a subset of columns in
   each of the ti (or expressions thereof like distances etc.) as
   interesting, with an associated alias. Our interface usually
   showed only such virtual columns, but had a possibility to add
   (naming them manually as ti.colname) to the SELECT also all
   other member columns normally hidden.

   We also allow to correlate a virtual table with a single physical
   table tk (be it member or not) using the t0tk correlation table
   (t0 is the First Member).

 - the above worked and still works, but has some clumsiness. When
   we upgraded to mysql 5 and discovered the CREATE VIEW command
   we decided to replace our virtual tables with views.

 - for each G we define a view as

 create algorithm=temptable view V as
 SELECT
  some subset of columns in some of the t0...tn or in G
  or some expression thereof
 FROM
  G left join t0 on G.t0=t0.seq
left join t1 on G.t1=t1.seq
...
left join tn on G.tn=tn.seq

 - the algorithm=temptable was required because without it some
   of our queries (see below) entered in the statistics status forever 
   already with 11 members
   
 - for the rest the VIEWs work nicely when used standalone and are easier
   for the user ...

 - ... but on the other hand they HIDE the member columns which are
   not explicitly named in CREATE VIEW (where one wants to keep a
   manageable number of columns). Hide means here that their names
   ti.colname cannot be used in SELECT !   

 - so we devised an option by which on ticking on show members also
   one can also include these ti.colname in the query
   
   de facto this doubles the joins, because the statement built is

   SELECT 
list of (V.colname and ti.colname with i chosen among 0 and n)
   FROM
( G left join t0 on G.t0=t0.seq
left join t1 on G.t1=t1.seq
...
left join tn on G.tn=tn.seq
)   left join V  on G.seq=V.seq 

   This statement NOW works (it did not work with e.g. 11 member tables
   before we switched to ALGORITHM=TEMPTABLE in the CREATE VIEW).

   An explain select for a query on such views gives that a view with
   n members with members also 
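The n!-combinations search described above runs during the statistics phase; in MySQL 5.0 it can be bounded with the optimizer variables, or sidestepped by fixing the join order. A hedged sketch; the values are illustrative:

```sql
SET SESSION optimizer_search_depth = 4;  -- cap the join-order search;
                                         -- 0 lets the server choose a depth
SET SESSION optimizer_prune_level  = 1;  -- heuristic pruning (the default)
-- Alternatively, force the written join order:
SELECT STRAIGHT_JOIN ... FROM G LEFT JOIN t0 ON G.t0 = t0.seq ...;
```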

Re: hardware clusters

2007-08-31 Thread Ricardo Oliveira
Hi,

 As usual, everything is heavily dependent on your specific scenario.
 Anyway, as a rule of thumb, databases benefit a LOT from RAM, and storage
nodes benefit from I/O (more, faster disks).


Regards,
Ricardo




Fwd: hardware clusters

2007-08-31 Thread Michael Dykman
That is not a question that can be answered directly... it comes down
to exactly how much data you expect to be handling, how many nodes you
plan on using, and what your proposed node configuration might be.
Generally, a lot of RAM always helps, and in a RAM-based solution like
NDB it's likely to come in handy, but if your data is
small, adding RAM will have no effect. As for the effectiveness of
multiple cores, again, what kind of concurrency load are you expecting?

The question is a fairly complex one and has everything to do with
one's particular circumstances: that is why there are no simple
answers.


On 8/31/07, Sid Lane [EMAIL PROTECTED] wrote:
 all,

 I am working on a budget proposal for next year to put in a MySQL cluster
 but wanted to validate (or correct) a couple of assumptions:

 1.  do storage nodes benefit far more from additional RAM than they do from
 faster CPUs/multiple cores?

 2.  do SQL nodes benefit more from faster CPUs/multiple cores?

 basically, I am trying to find the best bang/buck configuration for each
 server type and am assuming it's better to spend more $ on RAM for storage
 nodes and CPU for SQL nodes - is this correct?

 is anyone aware of any benchmarking that has been done with various
 configurations?

 any help appreciated...

 thanks!



--
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.


-- 
 - michael dykman
 - [EMAIL PROTECTED]

 - All models are wrong.  Some models are useful.




Re: JOIN migration from Oracle to MySQL

2007-08-31 Thread Shawn Green

[EMAIL PROTECTED] wrote:

Hello,

thank you - now it works.

d_parenttype PT1 LEFT OUTER JOIN t_booklists bl ON (PT1.ID = bl.PARENTTYPE_1),
d_parenttype PT2 LEFT OUTER JOIN t_booklists bk ON (PT2.ID = bk.PARENTTYPE_2)


I had to put the alias to all listet fields in the select.



Unless you are relating PT1 to PT2 in some way, you should not expect 
this query to perform well because you will be generating a Cartesian 
product between PT1 and PT2. I doubt this is actually what you are 
trying to do (although it will eventually work). If you posted just a 
few more details about the query you are trying to write, we could try 
to help you to rewrite it in a way that will perform much better than 
the translation you just attempted.

--
Shawn Green, Support Engineer
MySQL Inc., USA, www.mysql.com
Office: Blountville, TN
__  ___ ___   __
   /  |/  /_ __/ __/ __ \/ /
  / /|_/ / // /\ \/ /_/ / /__
 /_/  /_/\_, /___/\___\_\___/
___/
 Join the Quality Contribution Program Today!
 http://dev.mysql.com/qualitycontribution.html




MyISAMChk has trashed all my .myd and left me with .tmd

2007-08-31 Thread James Halliday
Had a minor crisis this morning. One of my database tables had become
corrupted.
As I've done quite a few times, I shut down the db, ran myisamchk -r and it
seemed to repair the table.
Shortly afterwards it corrupted again - seemed a little strange, but decided
I'd give it another go and ran myisamchk on all tables (just to make sure).
Again it reported success and I logged in through PHP admin, not a pretty
sight - everything appeared to be broken.
Back to my datafiles and had a horrible moment of realization when I noticed
I had no .myd files any more - all gone. In their place a .tmd file.
Little bit of googling reports that these are temp files, quick look in them
seems to find something that looks like my data (this is an hmailserver db
that's crashed and died), but doesn't offer any more info.
Played with a few data recovery tools and they reported they could see
deleted myd files with a 'size' but recovery failed with a variety of error
messages.

Just wondering if anybody could help me out. I've moved through shock, anger,
and grief, and am now just accepting the loss of many email accounts over many
years. I was hoping that if these were temp files, then MySQL should have
the ability to transform them back into .myd files - but so far no luck and
no information.
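One hedged recovery sketch, assuming the .tmd files really are myisamchk's interrupted temporary copies. No guarantees: work on copies of the files, with mysqld stopped, and the database/table names here are hypothetical:

```sql
-- After renaming tablename.TMD back to tablename.MYD at the filesystem
-- level, try rebuilding the indexes from the .frm definition:
REPAIR TABLE mydb.mytable USE_FRM;
CHECK TABLE mydb.mytable;   -- verify the result afterwards
```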

Any help/pointers/alcohol would be much appreciated.


ANN: DBForms from MS Access to PHP + MySQL v.2.1.1 released

2007-08-31 Thread Dmitry Narizhnykh
DBConvert.com announces the release of DBForms from MS Access to PHP +
MySQL v.2.1.1.


Our application allows you to convert .mdb files (Microsoft Access databases)
and MS Access forms to AJAX-based Web pages using PHP and MySQL. DBForms from
MS Access to PHP + MySQL supports elements in MS Access forms such as Label,
Images as OLE types, Images as form elements, OptionGroup, TextBox, ListBox,
ComboBox, CommandButton, Line, Tab Control, and Subforms.


The solution helps you convert forms (Single Form, Continuous Forms,
Datasheet) and their elements to Web pages, filling the form fields with data
from the fields of your Access tables. During conversion, element positions,
the color palette, and field types are converted accurately. Unicode support
and conversion of primary keys and indexes are available. You can place your
forms on a Web page directly through an FTP connection, or save them to a
local folder on your computer for later uploading via FTP.


Please find more information about the software at:
http://dbconvert.com/convert-accessforms-to-web-mysql.php

You can download an evaluation copy of DBForms from MS Access to PHP +
MySQL v.2.1.1 from:

http://dbconvert.com/downloads/dbforms_php_mysql.zip

We hope you will enjoy working with our software.


Sincerely yours,
DBConvert.com






RE: BUG: DATE_ADD 99999 fails, but 9999 works.

2007-08-31 Thread Daevid Vincent
So, I found out why we had to switch all of our datetimes to timestamps:

http://dev.mysql.com/doc/refman/5.0/en/time-zone-support.html

The current time zone setting does not affect values in DATE, TIME, or
DATETIME columns.

So it appears this is a catch-22... In order to store things in UTC and then
use the user's local TZ, you need to use 'timestamps'. But then you can't
use them for years past 2038... However, 'datetime' will store any date, but
you can't store in UTC and display via the TZ setting.

:-\

 -Original Message-
 From: Daevid Vincent [mailto:[EMAIL PROTECTED] 
 Sent: Monday, August 27, 2007 6:09 PM
 To: 'MySQL General'
 Cc: 'Chris'
 Subject: RE: BUG: DATE_ADD 99999 fails, but 9999 works.
 
  -Original Message-
  From: Chris [mailto:[EMAIL PROTECTED] 
  Sent: Monday, August 27, 2007 5:45 PM
 
  I don't think this is a bug. I think what's happening is that your
  timestamp column can't hold that date; its max value is somewhere in 2038.
 
 You appear to be correct, buried in the plethora of bullet points here:
 http://dev.mysql.com/doc/refman/5.0/en/datetime.html
 For example, TIMESTAMP values cannot be earlier than 1970 or later than
 2038.
 
 So that _is_ the root cause of the problem, but it's still a bug. 
 
 There is no reason (from a MySQL user/PHP developer's perspective) that
 2038 should be my upper year limit. I should be able to make any date up
 to 9999-12-31 !!?
 
 This is absurd. We're making enterprise-level tools that run at US
 Government offices, the entire state of Alaska, the military, colleges,
 Fortune 500 companies... You mean that 21 years from now, all this will
 just fail miserably because of some obscure 2038 limitation? This is Y2K
 all over again -- unless mySQL fixes this bug.
 
  So I guess either change your timestamp column to a datetime column,
 
 Interesting thing with that: we used to use datetime columns (where
 applicable), but since we store everything in UTC now, as this product is
 international, we had to switch (painfully, I might add) to timestamp. I
 forget the exact reason, and this was about a year ago, so it may be moot
 now anyway -- it had to do with using mySQL's conversion routines so the
 dates would display in the GUI in the local timezone the user set in
 their profile, and those routines didn't work on datetime or some such
 nonsense.
 
  or prevent users from putting invalid data in.
 
 I've limited the text field to 4 digits from 5, but that doesn't make
 this any less of a mySQL issue; that's just a band-aid to mask an
 inadequacy of the RDBMS.
 
 
 
 





Re: servers full potential / FT searches locking tables

2007-08-31 Thread Justin

Alright.. I think I see what is happening after this latest lockup..

here's what I think is happening..

When a replace into query locks a table for a few seconds, there are a
boatload of connections to the db, and then when the table is unlocked the
connections start to filter through. Usually they all finish and de-queue
nicely, but this last time it seemed there were 400-500 constant connections
never actually going away.. the query itself finished, but there was one
right behind it to take its place..


Almost like it's giving itself a DoS..  Are there any settings I can adjust
on the server to help with this? Or would it be more on the code side?


As always, when I restart the instance of mysql everything goes back to
running smoothly, so it makes me wonder if it's something in the mysql
config that is lagging for some reason.


thanks.


- Original Message - 
From: Michael Dykman [EMAIL PROTECTED]

To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 1:31 PM
Subject: Re: servers full potential / FT searches locking tables


No, I'm afraid not.  32-bit architectures have a theoretical limit of
4G of memory space for the entire application; in actual practice, for
a variety of reasons too complex to go into here (and well documented
elsewhere), your key buffer should be limited to around 2.5G max, and
this is assuming a pure MyISAM implementation.  There simply is no way
a 32-bit build can make use of all that RAM, regardless of OS.

- michael dykman


On 8/28/07, Justin [EMAIL PROTECTED] wrote:

32bit, but I have all available memory..

MemTotal:  8179612 kB
MemFree: 43684 kB

on the box.   I think the 4GB limit is only on Windows.

All my tables are in myisam

so if I was to set
key_buffer_size=5500M

That'd be acceptable?

- Original Message -
From: Mathieu Bruneau [EMAIL PROTECTED]
To: Justin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Tuesday, August 28, 2007 12:51 AM
Subject: Re: servers full potential / FT searches locking tables


 Your settings don't seem very optimized.

 So here's the first question: are you on a 32-bit or a 64-bit platform?
 If you have a 64-bit platform with a 64-bit mysql and OS, you can boost
 most of the settings to use almost the 8G of RAM you have on the server.
 If you are using 32 bits you will have to do some calculation so you
 don't go over ~2.6G (why not 4Gb? go read up on that on the net).

 So the 2 most important settings are:
 key_buffer_size (mainly myisam table)
 and/or
 innodb_buffer_pool_size (innodb table)

 Depending on whether you're using more innodb or myisam (or a mix),
 you'll tweak those parameters differently; it's usually not recommended
 to go over 4Gb for the key_buffer_size. MyISAM only stores the keys in
 that buffer, so if you don't have many indexes it's not worth making it
 too big for no reason. InnoDB, however, can cache data as well, and will
 benefit from the biggest value possible.

 The server generates statistics that you can look at to see the effect
 of that. If you are using phpMyAdmin, the variables and status pages show
 the index usage to guide you.


 You can have a look at the different my.cnf files that come with the
 mysql distribution; they include comments with interesting values as
 rules of thumb. Here is the excerpt for key_buffer_size and
 innodb_buffer_pool_size:
 # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
 # Do not set it larger than 30% of your available memory, as some memory
 # is also required by the OS to cache rows. Even if you're not using
 # MyISAM tables, you should still set it to 8-64M as it will also be
 # used for internal temporary disk tables.
 key_buffer_size=2G

 # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
 # row data. The bigger you set this the less disk I/O is needed to
 # access data in tables. On a dedicated database server you may set this
 # parameter up to 80% of the machine physical memory size. Do not set it
 # too large, though, because competition of the physical memory may
 # cause paging in the operating system.  Note that on 32bit systems you
 # might be limited to 2-3.5G of user level memory per process, so do not
 # set it too high.
 innodb_buffer_pool_size=2G

 Regards,
 --
 Mathieu Bruneau
 aka ROunofF

 ===
 GPG keys available @ http://rounoff.darktech.org

 Justin wrote:
 Ok.. Straight to the point.. Here is what I currently have.

 MySQL Ver 14.12 Distrib 5.0.27
 RHEL vs 5
 584GB Raid 5 storage
 8GB of RAM
 and Dual 5130 processors (2.0GHz Intel Dual-Core Xeon)

 My question is: am I utilizing the server's potential with the following
 settings? The server is a dedicated MySQL server, so I want all power to
 go to it. It just seems laggy at times, and I want to be sure I've
 optimized to the fullest potential.

 My biggest issue is with FT searches. Tables get locked during larger
 queries and I can't select anything when that happens. Is there any way
 not