How to admin a server that currently has too many connections?

2004-08-31 Thread Justin Swanhart
I am creating an index on a very large InnoDB table, and it is taking
a very long time to complete, which I understand is a limitation of
InnoDB.

The problem is that another application has been issuing queries
against the table and those queries have never timed out.  So now I
can't kill the connections, because I can't make a database connection
to mysql to even see their thread ids:

[EMAIL PROTECTED] mysql]$ mysqladmin -u root -p processlist
Enter password:
mysqladmin: connect to server at 'localhost' failed
error: 'Too many connections'

I don't want to kill the mysqld process, because that would stop my
index creation.

Is there anything you can do in this situation?  I don't think
connections from mysqladmin should ever be denied.  Is this a feature
request that I should make?

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Problem with management server communication.

2004-08-17 Thread Justin Swanhart
I have the same problem.  I have an open bug report here:
http://bugs.mysql.com/?id=4761

I can't figure out why my management server can not see my ndb
servers.  Perhaps I should try to down the secondary interface on all
the servers?

On Tue, 17 Aug 2004 15:54:29 -0400, Wheeler, Alex [EMAIL PROTECTED] wrote:
 Yes, now I remember.  This happens when I have either 2 interfaces or a
 disagreement on hostname.  If the clients don't see the server or
 themselves by the same hostname as the server, this type of error can be
 seen.
 I've been set up with a node that did the initial communication via eth1
 but then tried the remainder via eth0, which corresponded with the
 hostname and obviously didn't work.
 I ended up temporarily changing the hostname to reference the eth1 ip.
 
 --
 Alex Wheeler
 
 -Original Message-
 From: IHLING, CHRIS G (CHRIS) [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, August 17, 2004 3:43 PM
 To: IHLING, CHRIS G (CHRIS)
 Cc: [EMAIL PROTECTED]
 Subject: RE: Problem with management server communication.
 
 I have no problem pinging either box from both. There is one other
 strange thing. If I try to use a fully qualified hostname or ip address
 in the config file I get an error when I try to start the ndbd.
 
 -Original Message-
 From: Jonas Oreland [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, August 17, 2004 11:42 AM
 To: IHLING, CHRIS G (CHRIS)
 Cc: [EMAIL PROTECTED]
 Subject: Re: Problem with management server communication.
 
 Hi,
 
 Also make sure that both hostnames are accessible from both computers.
 I.e. that you can ping both lpsdev3-n1 & lpsdev3-n2 from both lpsdev3-n1
 & lpsdev3-n2
 
 /Jonas
 
 IHLING, CHRIS G (CHRIS) wrote:
  I have removed the [TCP] section from my config file and I am still
  seeing the same no contact message. Here is the config file I am
  using:
 
  [COMPUTER]
  Id:1
  ByteOrder: Little
  HostName: lpsdev3-n2
 
  [COMPUTER]
  Id:2
  ByteOrder: Little
  HostName: lpsdev3-n1
 
  [MGM]
  Id:1
  ExecuteOnComputer: 1
  PortNumber: 2200
  ArbitrationRank: 1
 
  [DB DEFAULT]
  NoOfReplicas:2
  FileSystemPath: /usr/lps/mysql/data1
 
  [DB]
  Id:2
  ExecuteOnComputer:1
 
  [DB]
  Id:3
  ExecuteOnComputer:2
 
  [API]
  Id:4
  ExecuteOnComputer:1
 
  [API]
  Id:5
  ExecuteOnComputer:2
 
  Is there any way to turn on tracing or check logs to find out more
  info?
 
  -Original Message-
  From: Wheeler, Alex [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, August 17, 2004 10:08 AM
  To: [EMAIL PROTECTED]
  Subject: RE: Problem with management server communication.
 
 
  I've seen this kind of error before, and if I recall correctly it's
  due to a misconfiguration of the TCP information.  So, if you have any
 
  TCP entries in the config.ini file such as [TCP]NodeId1:..., remove
  them as they are automatically configured anyway. Then stop the nodes,
 
  and start again.  Make sure you use the -i when starting the clients.
 
  --
  Alex Wheeler
 
  -Original Message-
  From: IHLING, CHRIS G (CHRIS) [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, August 17, 2004 12:27 PM
  To: '[EMAIL PROTECTED]'
  Subject: Problem with management server communication.
 
  I am having a problem with the management server getting status from a
 
  ndbd started on another machine.
 
 
 
  The ndbd server will start and report status fine if it is running on
  the same machine as the management server.
 
  The ndbd server will start on a different machine but no status is
  reported.
 
   If the management server is not running, ndbd will not start, so there
   must be some kind of communication happening, just no status.
 
 
 
  Any suggestions on what to try?
 
 
 
 --
 Jonas Oreland, Software Engineer
 MySQL AB, www.mysql.com
 
 --
 MySQL Cluster Mailing List
 For list archives: http://lists.mysql.com/cluster
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 
 





Re: MySQL eats 100% CPU on a quad CPU system...

2004-08-16 Thread Justin Swanhart
Are you swapping?  Check vmstat and iostat

Are your queries optimized, or are they doing full table scans?  Enable
the slow query log.
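The slow query log can be turned on in my.cnf; a sketch using the MySQL 4.x option names (the path and threshold here are examples, not from the original post):

```ini
[mysqld]
# Log statements that take longer than long_query_time seconds.
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2
```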


On Mon, 16 Aug 2004 14:48:35 +0200, Fahr o Mavastradioz
[EMAIL PROTECTED] wrote:
 Hello people,
 
 I'm currently configuring a quad CPU system as a standalone MySQL server.
 The machine runs nothing except for MySQL and some trivial things like SSH
 and iptables.
 A second server acts as a webserver and is connected to this one though an
 internal LAN connection.
 Using top or ps, it shows that MySQL eats over 99% of all the available CPU.
 Using slmon, it shows that indeed all 4 CPUs are fully in use. Still, it
 appears to be terribly slow and I don't think MySQL should eat over 99% CPU
 on a quad CPU system...
 The machine runs mysqld 4.0.18 on Debian Sarge. A stripped down my.cnf is
 included below (I merely removed the comments).
 Could anyone tell me what can cause this problem?
 
 Thanks,
 - Fahr
 
 ===my.cnf===
 [client]
 port= 3306
 socket  = /var/run/mysqld/mysqld.sock
 
 [mysqld_safe]
 err-log = /var/log/mysql/mysql.err
 socket  = /var/run/mysqld/mysqld.sock
 open_files_limit = 8192
 
 [mysqld]
 user= mysql
 pid-file= /var/run/mysqld/mysqld.pid
 socket  = /var/run/mysqld/mysqld.sock
 port= 3306
 basedir = /usr
 datadir = /var/lib/mysql
 tmpdir  = /tmp
 language= /usr/share/mysql/english
 skip-locking
 thread_stack= 128K
 skip-innodb
 max_connections = 500
 key_buffer  = 150M
 myisam_sort_buffer_size = 64M
 join_buffer_size  = 1M
 read_buffer_size  = 1M
 sort_buffer_size  = 1M
 table_cache   = 1500
 thread_cache_size = 128
 wait_timeout  = 14400
 connect_timeout   = 10
 max_allowed_packet = 16M
 max_connect_errors = 10
 query_cache_limit  = 1M
 query_cache_size   = 32M
 query_cache_type   = 1
 
 [mysqldump]
 quick
 max_allowed_packet  = 16M
 
 [mysql]
 
 [isamchk]
 key_buffer = 64M
 sort_buffer = 64M
 read_buffer = 16M
 write_buffer = 16M
 
 
 





want input on method to avoid replication conflicts

2004-08-10 Thread Justin Swanhart
I have four machines A,B,C,D..

A and B are dual masters while C and D are read only slaves

A --- B
|     |
C     D


I have a load balancer that connects all clients to A and, when A
fails, connects them to B.  In practice it is a little more
complicated, because if a server becomes backlogged it recuses itself
from rotation (but there is logic to prevent a cascade where all
servers recuse themselves).

The problem is that once a failover has occurred from A to B, B
pretty much has to stay the master forever, until it fails and A takes
over.  My load balancer doesn't support this kind of logic, and manual
intervention would be required to implement it.

I have thought about making the top few bits of the auto increment key
reflect the server id (thanks for the idea Jeremy), but I don't really
want partitioned auto-increment values.

What I am thinking about doing is modifying the auto-increment values
so that server A always produces an EVEN value, while server B always
produces an ODD value.
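For reference, later MySQL releases (5.0.2 and up, well after this thread) added server variables that implement exactly this interleaving, so it no longer has to be emulated in the application; a my.cnf sketch:

```ini
# Server A: generate even values (2, 4, 6, ...)
[mysqld]
auto_increment_increment = 2
auto_increment_offset    = 2
```

Server B would use the same increment with `auto_increment_offset = 1` to generate odd values.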

That seems like it would nearly eliminate the possibility of conflicts,
and my writes could be load balanced without any problems.  It won't
fix problems with conflicts of other unique columns, but that seems
like a good thing for the most part at least for my applications.

My healthcheck logic will make sure that server A doesn't appear to be
back up to the load balancer until it has caught up with server B, or
vice versa.

Does this sound like a good idea, or should I be thinking about some
other way to do this?

Justin Swanhart




Re: What would happen in these two cases?

2004-08-10 Thread Justin Swanhart
You can put multiple renames in one statement, and the entire rename
will be atomic.

I create summary tables from some of my data, and I periodically
refresh them.  When refreshing them, I create new tables to replace the
old tables with.

Then I do:

RENAME TABLE current_table TO old_table, new_table TO current_table;

On Tue, 10 Aug 2004 13:50:34 -0700, Haitao Jiang [EMAIL PROTECTED] wrote:
 It would be great if there is a swap table command that is atomic.
 
 Thanks a lot
 
 Haitao
 
 On Tue, 10 Aug 2004 13:23:30 +0300, Egor Egorov [EMAIL PROTECTED] wrote:
  Haitao Jiang [EMAIL PROTECTED] wrote:
 
   Thanks so much for the reply! So, if I understand correctly, to swap
   the table like I described, I should do:
  
   LOCK TABLES A write,A2 write;
   FLUSH TABLES;
   rename table A to A1;
   rename table A2 to A;
   UNLOCK TABLES;
  
   Right? If there is no write to either A or A2, then there is no need
   to lock the table, right?
 
  You can't rename locked tables. RENAME is atomic anyway so you can safely use
  it without lock. But your software should be aware of a possible race condition
  that happens between two RENAME TABLEs.
 
 
 
 
   Thanks!
  
   On Mon, 09 Aug 2004 19:21:39 +0300, Egor Egorov [EMAIL PROTECTED] wrote:
   Haitao Jiang [EMAIL PROTECTED] wrote:
  
Case 1: I have a table A under a running MySQL 4.1.3b server. If I
replace A.MYD and A.MYI files without stop and start the server, will
data gets corrupted?
  
   It depends on many factors. Honestly, there are lots of cases where you will
   get correct data and nothing wrong will happen. But you have to be an
    experienced Unix developer who understands Unix internals in order to dance like
   that. :)
  
   So the general answer is: don't, it's too dangerous.
  
Case 2: I rename A to A1, and A2 to A, assume A, A2 have the same
schema but different data/index. Is it bad? Will data gets corrupted?
    I tried this one; it seems ok.
  
   Yes, you're right, it just *SEEMS* ok. :)
  
   If you really need to replace table files, use FLUSH TABLES, LOCK TABLES:
  
   http://dev.mysql.com/doc/mysql/en/FLUSH.html
   http://dev.mysql.com/doc/mysql/en/LOCK_TABLES.html
  
   --
   For technical support contracts, goto https://order.mysql.com/?ref=ensita
   This email is sponsored by Ensita.net http://www.ensita.net/
 __  ___ ___   __
/  |/  /_ __/ __/ __ \/ /Egor Egorov
   / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
   /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
 ___/   www.mysql.com
  
  
  
  
  
 
 
 
 
 
 
 
 





Re: What would happen in these two cases?

2004-08-10 Thread Justin Swanhart
FYI, the atomicity of rename and using it to swap tables is discussed
in the manual here:

http://dev.mysql.com/doc/mysql/en/RENAME_TABLE.html

Justin

On Tue, 10 Aug 2004 13:50:34 -0700, Haitao Jiang [EMAIL PROTECTED] wrote:
 It would be great if there is a swap table command that is atomic.
 
 Thanks a lot
 
 Haitao





Re: want input on method to avoid replication conflicts

2004-08-10 Thread Justin Swanhart
Each server inserts a heartbeat value into a replicated table, so each
server can see when it last received a heartbeat from its master.  If a
heartbeat has not been received from the master in 3 minutes, and the
master is available, the replication status is checked with SHOW SLAVE
STATUS.  If replication is running and there are no errors, I assume a
long operation is taking place (such as CREATE INDEX) and allow both
servers to remain up.  If either of the replication threads has stopped
because of an error, the server recuses itself and a page is
automatically sent to me.

The biggest problem here is that the servers can't both recuse
themselves.  I have a number of healthchecks, and each is assigned a
severity level.  The more severe the problems, the less healthy the
server is said to be.  My script basically says: if I am more healthy
than my peer, I report up; if I am equally healthy as my peer and my
server_id is > 1, I report down; if I am less healthy than my peer, I
report down.

That is the logic for writes.  For reads it is basically: if I am not
backlogged, then I am up.
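The tie-breaking logic described above can be sketched as a small pure function (the health-score convention and the server_id rule here are illustrative reconstructions, not the poster's actual script):

```python
# Hypothetical sketch of the write-side health-check decision.  Higher
# health score = healthier; these names are examples, not a real API.

def report_up_for_writes(my_health, peer_health, my_server_id):
    """Decide whether this server reports 'up' for writes."""
    if my_health > peer_health:      # more healthy than peer -> up
        return True
    if my_health < peer_health:      # less healthy than peer -> down
        return False
    # Equal health: servers with server_id > 1 report down, so exactly
    # one server (id 1) stays up and they never both recuse themselves.
    return my_server_id == 1
```

The point of the asymmetric tie-break is that the two peers can evaluate the same rule independently and still never both report down.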


On Tue, 10 Aug 2004 17:50:27 -0400, Mayuran Yogarajah
[EMAIL PROTECTED] wrote:
 Justin Swanhart wrote:
 
 Im curious about this part.
 
 My healthcheck logic will make sure that server A doesn't appear to be
 back up to the load balancer until it has caught up with server B, or
 vice versa.
 
 
 
 How do you accomplish this ? We have a similar situation here where a check
 is needed to be done to see if a slave has caught up to a master.  Are
 you checking
 the binary log names/positions or something more ?
 
 thanks,
 Mayuran





Re: Setting custom information in processlist

2004-08-10 Thread Justin Swanhart
Oracle has a package called DBMS_APPLICATION_INFO (for example, its
SET_CLIENT_INFO procedure) that allows you to attach additional
information to the current connection.  It doesn't have any way to
specify this information at connect time, though.

The data can be accessed in Oracle through the V$SESSION system view,
or through
userenv('CLIENT_INFO')

Something in MySQL that would be similar and just as easy to implement
would be:

create table process_info (ThreadID int, Information text,
                           primary key (ThreadID));

then in each connection do:
replace into process_info values (CONNECTION_ID(), 'Connection details here');

Add in a little cron job that removes old values from the process_info
table nightly and that should do the trick.


On Tue, 10 Aug 2004 15:24:34 -0700, Jeremy Zawodny [EMAIL PROTECTED] wrote:
 On Thu, Aug 05, 2004 at 12:36:55PM +0100, Naran Hirani wrote:
  Hi,
 
  I'm using a single shared user-login for a web-based application to
  my mysql database - is there a way of including some information at
  connect time or during processing that would show up when issuing
  `show processlist'?
 
 Only if you prefixed each query with a comment:
 
    /* foo #3 */ SELECT * FROM world ORDER BY...
 
 But not at connect time.
 
  Basically, I need to able to distinguish potentially multiple
  connections with the same login and process information some how.
  This sort of thing is possible in other SQL engines so probably
  should be in MySQL too.
 
 Interesting.  How do other systems handle this?
 
 Jeremy
 --
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/
 
 [book] High Performance MySQL -- http://highperformancemysql.com/
 
 
 
 





Re: SELECT WHERE problem

2004-08-09 Thread Justin Swanhart
Because not doing so violates the SQL standard.  Allowing you to
include non-aggregated columns in the SELECT list is a non-standard
MySQL extension to the SQL language.  You will get an error in other
products, such as Oracle, where you will see an "xxx is not a GROUP BY
expression" error.

Information on this feature is here:
http://dev.mysql.com/doc/mysql/en/GROUP-BY-hidden-fields.html

On Mon, 9 Aug 2004 17:22:17 -0600, René Fournier [EMAIL PROTECTED] wrote:
 Thanks, the LEFT JOIN worked.
 
 I do have a question though, why is it considered best practice to list
 all non-aggregated columns ( I assume you mean columns from trucks.*)
 in the GROUP BY statement? I ask because I am interested in fast,
 secure, standards-compliant code, I'm just not always sure what that
 is. :-)
 
 René
 
 ---
 René Fournier,
 www.renefournier.com
 
 
 
 On Aug 9, 2004, at 2:14 PM, [EMAIL PROTECTED] wrote:
 
  You need to change your INNER JOIN to a LEFT JOIN
 
  SELECT
  trucks.id, sum(history.time_sec) as total_seconds
  FROM
  trucks
  LEFT JOIN
  history
   ON trucks.id = history.truck_id
  GROUP BY
  trucks.id
  ORDER BY
  total_seconds desc
 
  One other issue ---IMHO, the SQL engine is being too kind when it
  allows
   you to execute a query like SELECT trucks.* ... GROUP BY ... In
  practically EVERY OTHER SQL-based product you will use, you will be
  required to list _all_ non-aggregated columns in your GROUP BY
  statement
  or you will get an error.  Listing every column you want to group on is
  considered proper SQL format and I highly recommend the practice.
 
  If you still want to see everything from your trucks table (like in
  your
  original query) you can do this:
 
  CREATE TEMPORARY TABLE tmpTruckIDs
  SELECT
  trucks.id, sum(history.time_sec) as total_seconds
  FROM
  trucks
  LEFT JOIN
  history
   ON trucks.id = history.truck_id
  GROUP BY
  trucks.id
  ORDER BY
  total_seconds desc;
 
   SELECT trucks.*, tmpTruckIDs.total_seconds
  FROM trucks
  INNER JOIN tmpTruckIDs
  ON tmpTruckIDs.id = trucks.id;
 
  DROP TABLE tmpTruckIDs;
 
  HTH,
  Shawn Green
  Database Administrator
  Unimin Corporation - Spruce Pine
 
   René Fournier [EMAIL PROTECTED] wrote on 08/09/2004 03:56:58 PM:
 
  I am having a problem building a SELECT statement that joins two
  tables
  with a WHERE condition.
 
 
  SELECT
 trucks.*
 
  FROM
 trucks, history
 
  WHERE
trucks.account_id = '100'
 AND trucks.status = 'Active'
   AND history.truck_id = trucks.id   -- This is the tricky bit
 
  GROUP BY
 trucks.id
 
  ORDER BY
 history.time_sec DESC
 
 
  Simply put (or as simply as I can put it :-) , this SELECT should
  return all trucks in order of their activity (history.time_sec). The
  problem is when a truck is new to the system and does not have a
  single
  record in the history table (and therefore no value for
  history.time_sec). In that case, the truck is excluded from the
   SELECTed rows, but I want it returned, just at the bottom of the list
  (least active).
 
  Any ideas how this can be done?
 
  ...Rene
 
 
 
 
 





Re: Nodes crash on table creation

2004-08-09 Thread Justin Swanhart
There must be an even number of data nodes because the cluster mirrors
data between two machines; it doesn't do three-way mirroring.  An even
number of nodes is required because each pair of data nodes becomes
a node group.

If you have three machines, you could create two NDB processes on each
machine, each using some of the memory you want to dedicate to NDB,
though this adds some additional overhead in the allocation of
operation records and what not.

Just make sure you don't mirror data to the same machine.  I (think)
they are mirrored in the order they occur, so do something like the
following (collapsed from standard INI format for brevity):

datamemory 33M, indexmemory 17M
MGM id 1, machine 1
DB id 2, machine 1
DB id 3, machine 2
DB id 4, machine 3
DB id 5, machine 1
DB id 6, machine 2
DB id 7, machine 3
API id 8, machine 1
API id 9, machine 2
API id 10, machine 3

I haven't actually tested that configuration, so your mileage may vary.
If you do test it, let me know how well it works for you.
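For readers unfamiliar with the collapsed notation, the same node list expands into the config.ini section syntax used elsewhere in this thread (an untested sketch, per the caveat above; [COMPUTER]/hostname sections omitted):

```ini
[DB DEFAULT]
NoOfReplicas: 2
DataMemory: 33M
IndexMemory: 17M

[MGM]
Id: 1
ExecuteOnComputer: 1

# Six data nodes across three machines; consecutive pairs on
# different machines form the node groups.
[DB]
Id: 2
ExecuteOnComputer: 1

[DB]
Id: 3
ExecuteOnComputer: 2

[DB]
Id: 4
ExecuteOnComputer: 3

[DB]
Id: 5
ExecuteOnComputer: 1

[DB]
Id: 6
ExecuteOnComputer: 2

[DB]
Id: 7
ExecuteOnComputer: 3

[API]
Id: 8
ExecuteOnComputer: 1

[API]
Id: 9
ExecuteOnComputer: 2

[API]
Id: 10
ExecuteOnComputer: 3
```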


On Mon, 9 Aug 2004 12:07:11 -0400, Wheeler, Alex [EMAIL PROTECTED] wrote:
 After doing further testing, it would appear that the number of nodes must be a 
 power of 2:
 A test of 2 nodes works fine, but 3 fails
 4 works, but 6 fails
 8 works...
 Is this documented and I just missed it?
 
 --
 Alex Wheeler
 
 
 
 
 -Original Message-
 From: Mikael Ronström [mailto:[EMAIL PROTECTED]
 Sent: Thursday, August 05, 2004 1:15 PM
 To: Wheeler, Alex
 Cc: [EMAIL PROTECTED]
 Subject: Re: Nodes crash on table creation
 
 Hi Alex,
 I tried a similar configuration with all nodes on the same computer and
 that worked fine.
 Please provide the trace file in some manner and I'll check it once
 more.
 The other nodes fail since there is only one replica and thus when one
 node fails then
 the cluster fails.
 
 Rgrds Mikael
 
 2004-08-05 kl. 18.34 skrev Alex Wheeler:
 
  I am using the mysql-4.1.4-beta-nightly-20040804 snapshot, compiled
  with
  shared memory support, though I'm not using that feature yet.
 
  I'm using a 3 data node, 4 computer configuration, though I've tried
  several options and the only one that seems to work with this build is
  just 1 computer, here's my config file:
 
  [COMPUTER]
  Id:1
  HostName: 192.168.208.101
 
  [COMPUTER]
  Id:2
  HostName: 192.168.208.100
 
  [COMPUTER]
  Id:3
  HostName: 192.168.208.102
 
  [COMPUTER]
  Id:4
  HostName: 192.168.208.103
 
  [MGM]
  Id:1
  ExecuteOnComputer: 1
  ArbitrationRank: 1
 
  [DB DEFAULT]
  NoOfReplicas: 1
  IndexMemory: 50M
  DataMemory: 100M
  LockPagesInMainMemory: Y
 
  [DB]
  Id:6
  ExecuteOnComputer: 2
  FileSystemPath: /home/awheeler/cluster3/ndb_data6/
 
  [DB]
  Id:7
  ExecuteOnComputer: 3
  FileSystemPath: /home/awheeler/cluster3/ndb_data7/
 
  [DB]
  Id:9
  ExecuteOnComputer: 4
  FileSystemPath: /home/awheeler/cluster3/ndb_data9/
 
  [API]
  Id: 20
  ExecuteOnComputer: 1
  ArbitrationRank: 2
 
 
 
  The nodes come up fine, but when I try to create a table, at least one
  of the nodes will bail out and leave an error log:
  Current byte-offset of file-pointer is: 468
 
 
  Date/Time: Thursday 5 August 2004 - 03:51:33
  Type of error: error
  Message: Internal program error (failed ndbrequire)
  Fault ID: 2341
  Problem data: DbtuxMeta.cpp
  Object of reference: DBTUX (Line: 128) 0x0006
  ProgramName: NDB Kernel
  ProcessID: 3244
  TraceFile: ndb_7_trace.log.1
  ***EOM***
 
  The others exit with Arbitrator decided to shutdown this node
 
  The computers all have different amounts of memory, the smallest being
  512MB.
 
  Let me know if you need more information, and where to send the trace
  file.
 
  Any ideas on what is causing ndbrequire to fail?
 
 
 
 Mikael Ronström, Senior Software Architect
 MySQL AB, www.mysql.com
 
 Clustering:
 http://www.infoworld.com/article/04/04/14/HNmysqlcluster_1.html
 
 http://www.eweek.com/article2/0,1759,1567546,00.asp
 
 





Re: a question/issue...

2004-08-01 Thread Justin Swanhart
[note: This discussion went on off-list, but I thought the info might
be useful in list context, so I am reposting it to the list.   Sorry
about the lack of individual messages.  I wish gmail had an option to
automatically add an address to a reply when you are replying to a
message that has been assigned a label.  I've requested the feature in
fact. ]


I didn't say it can't be done.  I said you have to be _very_ careful.
You can't use the connection in both the parent and the client.  If
you connect in the parent, then don't do anything with that connection
handle.  Let the client do all the work with that connection.  If you
don't do that you have an absolute recipe for disaster.  You need to
have one connection per child.  It simply won't work otherwise.  That
isn't to say you can't create one connection for each child in the
parent process and pass it through fork (more on that below).  Just
don't cross the streams, it will be _very bad_.  Close the connection
after the child ends.  Don't close the connection in the child.

The same logic applies to other file descriptors.  If you pass a
socket to a child process, you close the socket in the parent and
you don't try to do anything else with it.  If you don't close it, you
will eventually run out of file descriptors.  The difference is that
closing a database connection involves more than just closing a file
descriptor.
Database connections, well actually the database access layers, such
as PHP and PERL/DBI, do a lot of background work that is obfuscated
from the user.

This includes automatically closing file handles, database
connections, statement handles, etc.  If you close your handle in the
child, the client library may automatically issue a COMMIT and close
the connection; if you then kill the child and the parent also tries
to close the connection (probably because the connection has gone out
of scope), you are going to have problems.

This is why I said, when you use Perl/DBI you need to make sure
InactiveDestroy is set correctly in the parent and the child.  This
prevents that background voodoo from going on in DBI and ensures
that you don't run into problems.  You will need to do similar
synchronization in other languages to ensure that your children and
your parent get along.

My philosophy is that you should destroy the connection in the same
thread/process/whatever that created the connection.  This ensures
that resources are released properly.

Now, all that said, I truly believe that the best way to do things is
probably the easiest way.  In general it is much easier, and you are
going to have fewer bugs, if you just connect in the child.  If you are
connecting in the parent because the child doesn't normally know what
database to connect to, then just set a variable with the connection
details (dsn/connect string/etc.) and let the child connect on its own
using that.
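The "pass connection details, not handles" pattern described above can be shown in a minimal runnable sketch; sqlite3 stands in for MySQL so the example is self-contained, and every name here is illustrative, not the poster's code:

```python
# Child makes its OWN connection from a dsn passed by the parent; the
# connection is opened and closed in the same process, so no handle is
# ever shared across the fork.
import os
import sqlite3

def child_work(dsn):
    # The parent hands the child only the connection DETAILS (a dsn),
    # never a live handle.
    conn = sqlite3.connect(dsn)
    try:
        (answer,) = conn.execute("SELECT 6 * 7").fetchone()
        return answer
    finally:
        conn.close()  # closed in the same process that opened it

def run_in_forked_child(dsn):
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: query, report, exit
        os.close(read_fd)
        os.write(write_fd, str(child_work(dsn)).encode())
        os._exit(0)
    os.close(write_fd)                # parent: collect the child's answer
    data = os.read(read_fd, 64)
    os.close(read_fd)
    os.waitpid(pid, 0)
    return int(data.decode())

result = run_in_forked_child(":memory:")  # each call forks a fresh child
```

Because the parent never touches a database handle, there is nothing for the parent's cleanup (or the language runtime's destructors) to close behind the child's back.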

On Sun, 1 Aug 2004 12:02:30 -0700, bruce [EMAIL PROTECTED] wrote:

 since you're saying you can't share the connection, could you please tell me
 where in the docs it states you can't, and the reason(s) for not being able
 to 'share'.

 i would believe that once you make the connection, you should be able to
 'use' it between any process that can get 'access' to it.

 so my initial question still stands, how can it be accomplished, or why it
 can't be done.

 i'm inclined to believe that if you can create a db connection handle, then
 any other process should be able to use it, as long as the creating
 process/parent process is still alive/running

 -bruce




 -Original Message-
 From: Justin Swanhart [mailto:[EMAIL PROTECTED]
 Sent: Sunday, August 01, 2004 11:53 AM
 To: [EMAIL PROTECTED]
 Subject: Re: a question/issue...

 In general, it is probably a bad idea to inherit database connections
 from a parent in a fork()'ed child process.

 What is your reasoning behind not permitting the children to make their
 own connections?

 If you can not connect from the child and you must inherit a database
 connection from the parent, make sure you create a new connection for
 each child.  You can't share the same connection between the parent
 and the child.   If you use the connection in the parent do not use it
 in the child and vice versa.

 If you are using perl DBI then you need to set InactiveDestroy where
 it makes sense to do so.  See the DBI manual for details.  You will
 need to take similar measures in other environments.

 ..

 On Sun, 1 Aug 2004 09:22:21 -0700, bruce [EMAIL PROTECTED] wrote:
  hi...
 
  i'm testing an app where i want a parent app to create the mysql db
  connection link/handle, and basically pass this handle off to child
  procesess. is there any reason why this can't be done? are there any
  examples of this already being accomplished that you might provide???
 
  as far as i can tell/see, it should be possible. as long as the parent is
  still running, the connection link/handle should still be valid.
 
  i'm going to ultimately

Re: TOP

2004-07-28 Thread Justin Swanhart
TOP is a Microsoft SQL Server extension.

MySQL uses the LIMIT clause.

for instance, the following is a rather typical "top 10" SQL query:

select some_column, sum(another_column) total
  from some_table
group by some_column
order by total desc
LIMIT 10

On Wed, 28 Jul 2004 14:39:11 -0400, Kamal Ahmed
[EMAIL PROTECTED] wrote:
 Hi,
 Does anyone know how to do a TOP function in MySQL ?
 
 Thanks,
 
 -Kamal.
 
 





re: List of associated records

2004-07-25 Thread Justin Swanhart
Create a separate table called member_interests, or something similar.

Store one member_id and one interest_id (or whatever you have your PKs
named) in each row.

This is similar to an order entry system, which typically has one
table for order_headers and one for order_detail.  The order_header
table contains things like an order_id, the order_number, the
customer, the selected address, etc..  The order_detail table contains
the items that are on the order.
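A minimal runnable sketch of the join-table design described above (sqlite3 stands in for MySQL here, and all table and column names are examples):

```python
# One row per (member, interest) pair instead of a comma-delimited
# string, so membership can be queried with a plain join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members   (member_id   INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE interests (interest_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE member_interests (
        member_id   INTEGER NOT NULL,
        interest_id INTEGER NOT NULL,
        PRIMARY KEY (member_id, interest_id)
    );
    INSERT INTO members   VALUES (1, 'Robb');
    INSERT INTO interests VALUES (10, 'MySQL'), (11, 'Art');
    INSERT INTO member_interests VALUES (1, 10), (1, 11);
""")

# Which interests did member 1 check?  No string parsing required.
rows = conn.execute("""
    SELECT i.name
      FROM member_interests mi
      JOIN interests i ON i.interest_id = mi.interest_id
     WHERE mi.member_id = 1
     ORDER BY i.name
""").fetchall()
```

The composite primary key also prevents the same interest from being recorded twice for one member, which the comma-delimited-TEXT approach cannot enforce.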


On Sun, 25 Jul 2004 12:40:09 -0500, Robb Kerr
[EMAIL PROTECTED] wrote:
 I have come across this problem a few times and wondered how other people
 solved the problem.

 Let's say I have a table containing Members. Each Member can choose several
 items in which they are interested. Each of these items represent records
 in a separate table - Interests. How do you store which records from
 Interests the member has checked in their record of the Members table?

 Do you create a TEXT field in the Members table and save a comma-delimited
 string of InterestsIDs?

 Thanx.
 --
 Robb Kerr
 Digital IGUANA
 Helping Digital Artists Achieve their Dreams

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Display field of selected record from full table recordset

2004-07-25 Thread Justin Swanhart
You probably want to pick up a good SQL book.  MySQL by Paul DuBois is
a really good one.

http://www.amazon.com/exec/obidos/tg/detail/-/0735712123/qid=1090786499/sr=8-2/ref=pd_ka_2/102-0741496-3072118?v=glances=booksn=507846

You want to use the WHERE clause of the select statement.

SELECT table.some_column, table.another_column, ...
   FROM table
 WHERE some_column = 'some_value'

see the manual:
http://dev.mysql.com/doc/mysql/en/SELECT.html

On Sat, 24 Jul 2004 13:52:53 -0500, Robb Kerr
[EMAIL PROTECTED] wrote:
 I have a recordset that retrieves the full content of the table - all
 fields, all records. Depending upon the content of different fields in
 different tables, I need to display certain fields of certain records
 within the full recordset. What's the syntax for selecting a particular
 record for display relative to the field contents of another recordset?
 
 Thanx,
 --
 Robb Kerr
 Digital IGUANA
 Helping Digital Artists Achieve their Dreams
 
 http://www.digitaliguana.com
 http://www.cancerreallysucks.org
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: subquery problem.

2004-07-22 Thread Justin Swanhart
MySQL 4.0 doesn't support subqueries; they were added in 4.1.

In any event, your query would return all rows from
channels as long as there are any rows in users, which
I doubt is what you intended.
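For what the poster presumably intended (channels that have at least one associated user), MySQL 4.0 can express the same thing as a join instead; a sketch, where the channel_id join column is a hypothetical name:

```sql
-- 4.0-compatible stand-in for a correlated EXISTS:
-- channels with at least one matching row in users.
SELECT DISTINCT c.*
  FROM channels c
  JOIN users u ON u.channel_id = c.id;
```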


--- nambi c [EMAIL PROTECTED] wrote:
 Hi,
 
 My server version :  4.0.18-max-nt
 
 I have created 2 tables 'channels' and 'users' in my
 database. I can query these tables individually. I
 am
 trying to execute a query with 'exists' clause. 
 
 mysql> select * from channels where exists (select *
 from users);
 
 This give the following error.
 
 ERROR 1064: You have an error in your SQL syntax. 
 Check the manual that corresp
 onds to your MySQL server version for the right
 syntax
 to use near 'exists (sele
 ct * from users)' at line 1
 
 
 I am getting this syntax error message. The query
 seems perfect to me. Any clue what is happening?
 Help!
 
 -Nambi
 
 
 
   
 __
 Do you Yahoo!?
 Yahoo! Mail is new and improved - Check it out!
 http://promotions.yahoo.com/new_mail
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Transactions and mysql insert it

2004-07-22 Thread Justin Swanhart
MySQL doesn't guarantee that there will be no gaps in sequence values.
Assignment of the id is always atomic because InnoDB uses an AUTO_INC
lock that lasts for the duration of the insert, not the life of the
transaction.

Let's say your highest order number is 10:

transaction begins for client 1
  insert into orders (...) values (...)
  mysql_insert_id = 11 
  ... user adds stuff to order ...
  aborts order, transaction rolls back

at the same time
transaction begins for client 2
  insert into orders (...) values (...)
  mysql_insert_id = 12
  ... adds some stuff ...
  commits order

there will be a row with an id of 10 and a row with an id of 12 in
your database; id 11 was rolled back and is gone.

this is all documented here:
http://dev.mysql.com/doc/mysql/en/InnoDB_auto-increment_column.html
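The gap is easy to reproduce in a client session; a sketch (4.x TYPE=InnoDB syntax; later versions spell it ENGINE=InnoDB):

```sql
CREATE TABLE orders (
  id   INT AUTO_INCREMENT PRIMARY KEY,
  item VARCHAR(32)
) TYPE=InnoDB;

BEGIN;
INSERT INTO orders (item) VALUES ('widget');  -- allocates id 1
ROLLBACK;                                     -- id 1 is never reused

INSERT INTO orders (item) VALUES ('gadget');  -- gets id 2
COMMIT;
-- SELECT id FROM orders now returns only 2; the gap at 1 remains.
```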

--- Scott Haneda [EMAIL PROTECTED] wrote:
 I have been pulling my hair out trying to get a
 solution to something,
 assuming idiotically that in a transaction scenario
 I would not be able to
 get the insert it back out.  It seems to work, I am
 wondering how and if it
 is reliable.
 
 Give the scenario where I have 2 inserts I want to
 make, since I can not
 seem to figure out how to make 2 inserts in one
 statement, I will make 2,
 with the condition that the second one needs to know
 the insert id.
 
 I just don't see how mysql can know the insert id in
 a transaction
 situation, I am assumeing that mysql does not
 actually insert anything at
 all untill it sees the COMMIT, so how does it know
 the insert id at all, the
 records have not been inserted yet?  Is this
 reliable?
 
 Here is some pseudo code that shows what I am doing
 and commetns where I am
 confused:
 
 mysqlQuery(tConn, "START TRANSACTION");
 
 repeat 1000 times
   mysqlQuery(tConn, "INSERT INTO zaddress
     SET user_id = '123',
     address = '[EMAIL PROTECTED]'");
 
   // How can mysql know about this
   tID = mysqlInsertID(tConn);
 
   mysqlQuery(tConn, "INSERT INTO zblacklist
     SET user_id = '123',
     id = '" & tID & "', address = '" & tID & "'");
 end repeat;
 
 mysqlQuery(tConn, "COMMIT");
 -- 
 -
 Scott HanedaTel:
 415.898.2602
 http://www.newgeo.com   Fax:
 313.557.5052
 [EMAIL PROTECTED]Novato,
 CA U.S.A.
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   
 http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How do I import a .dmp file?

2004-07-22 Thread Justin Swanhart
While the extension on the file isn't set in stone, many people add
the .dmp extension to files created with the Oracle exp utility.

You can use a third-party utility called nxtract
(http://www.elmbronze.co.uk/nxtract/index.htm) to convert exp files
into tab-delimited files.  The eval version only does 5k rows of data
though.

I've never used nxtract, nor do I endorse it.  I just found it while
googling around looking for a tool to process Oracle exp files.

It might be easier to just connect to the Oracle instance (if it still
exists) using Perl and extract the data into INSERT statements that
way.




On Thu, 22 Jul 2004 15:53:39 -0700 (PDT), David Blomstrom
[EMAIL PROTECTED] wrote:
 --- Victor Pendleton [EMAIL PROTECTED] wrote:
  Are you referring to an Oracle .dmp export file? If
  so you will also need to
  do some parsing.
 
 * * * * * * * * * *
 
 I don't know; this is the first time I've tangled with
 a .dmp file. Someone told me it must have come from an
 Oracle system, because .dmp files are associated with
 Oracle. But I may have been misinformed.
 
 __
 Do you Yahoo!?
 Vote for the stars of Yahoo!'s next ad campaign!
 http://advision.webevents.yahoo.com/yahoo/votelifeengine/
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread Justin Swanhart
I don't see how using a multi-value insert would be
any faster than the insert between the tables.  It
would certainly be faster than one insert statement
per row, but I don't think it would be faster than
INSERT ... SELECT ...

The only reason I suggested an extended syntax insert
earlier was because I wasn't aware that a temporary
table was being loaded first.

Do you ever delete from this table?

Can you post the results from
show variables for us?

Have you removed the unnecessary duplicate key on the
first column of your primary key?

Thanks,

Justin



--- gerald_clark [EMAIL PROTECTED]
wrote:
 mysql -i < filename.sql
 
 matt ryan wrote:
 
  Lopez David E-r9374c wrote:
 
  Since you have a temp table created (no keys I
 assume), use the command
 
   mysqldump -v -e -n -t dbname tablename > filename.sql
 
 
   
 
  This creates a file that inserts the records back
 into the same table
 
  it also does not do an insert ignore
 
  I need the records to go into the historical
 table, with an insert ignore
 
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-21 Thread Justin Swanhart
--- matt ryan [EMAIL PROTECTED] wrote:
 Do you ever delete from this table?
 
 Temp table is truncated before the EBCDIC file is
 loaded
I meant the history table :)

 Have you removed the unnecessary duplicate key on
 the first column of your primary key?
 
 Have not touched the DIC index yet, I need a backup
 server to change
 indexes, it would take the main server down for too
 long, and it wont be
 that big an increase, it's only a 3 character index,
 I also do joins on
 that field to other tables, so I was hesitant on
 removing that index.

Even if it is only a 3 character index, you still need
to do disk reads/writes to update the values in the
index.  With a huge table like yours one index can
make a big difference.

It will still work fine with joins.  Since it is the
leading column of another index, it will function just
like a normal index.  Having indexes on a and (a,b) is
redundant for searching/joining only a.  If you need
to join/search on b, then a separate index is required
for b if you have indexed (a,b).
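As a sketch, with hypothetical table and index names:

```sql
-- PRIMARY KEY (a, b) already serves lookups on a alone,
-- so a second index on just a only slows down inserts.
ALTER TABLE history DROP INDEX idx_a;

-- This still uses the leading column of the primary key:
SELECT * FROM history WHERE a = 'XYZ';

-- A search on b alone, however, needs its own index:
ALTER TABLE history ADD INDEX idx_b (b);
```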


The last option I can think of requires a lot more
work on your part as far as inserting data, but it may
be your only option at this point.  This is why I
asked if you delete data from the history table, as it
makes deletes/updates more problematic as well.

Split the MyISAM table into separate tables.  We will
call each table a bucket.

Create a MERGE table over all of them for selecting
the data.

When inserting, use a hash function on your primary
key values to determine which bucket to insert into. 
If you almost always select by primary key, then you
can optimize your select queries to only look at the
correct bucket as well, and to only use the merge
table when you aren't selecting by primary key.

This will speed your inserts because instead of 258M
rows to search through for each insert there are only
8M if you use 32 buckets.  The biggest benefit is that
you could also insert in parallel using multiple mysql
connections, because you could calculate the bucket as
an additional column in your temporary table, then do
the inserts for all the buckets at the same time.
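A sketch of the bucket scheme with two buckets (names, columns, and the hash rule are all illustrative; CRC32() only exists from 4.1, so any deterministic function of the key can serve as the hash):

```sql
-- Identical MyISAM buckets; hist_01 .. hist_31 defined the same way.
CREATE TABLE hist_00 (
  pk  CHAR(12) NOT NULL,
  val INT,
  PRIMARY KEY (pk)
) TYPE=MyISAM;

-- A MERGE table over the buckets for non-keyed selects:
CREATE TABLE hist_all (
  pk  CHAR(12) NOT NULL,
  val INT,
  KEY (pk)
) TYPE=MERGE UNION=(hist_00, hist_01);

-- Route rows by a hash of the key; these can run in parallel,
-- one connection per bucket:
INSERT IGNORE INTO hist_00
SELECT pk, val FROM temp_load WHERE MOD(ASCII(pk), 2) = 0;
INSERT IGNORE INTO hist_01
SELECT pk, val FROM temp_load WHERE MOD(ASCII(pk), 2) = 1;
```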




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Cluster on Solaris 9/x86 works

2004-07-20 Thread Justin Swanhart
You do have ByteOrder: Big in the .ini file for the
sparc database servers, right?



--- Alexander Haubold [EMAIL PROTECTED] wrote:
 Hi everyone,
 
 Just to follow up on my previous post regarding
 Cluster on Sparc/Solaris 9: 
 On an x86 Solaris 9 machine that was set up similar
 to the Sparc one, MySQL 
 Cluster (4.1.4) does not produce a Bus Error. Ndbd
 starts up just fine.
 
 I hope that the source will be corrected for the
 Sparc platform, or a note 
 is published on what needs to be done differently to
 compile for Sparc 
 versus x86.
 
 - Alex
 
 
 -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
 Alexander Haubold
 Columbia University - SEAS 2001, 2003
 
 362 Riverside Dr. #5B3
 New York, NY 10025
 
 Phone: +212-853-8239
 Email: [EMAIL PROTECTED]
 WWW: http://www.aquaphoenix.com
 
 
 
 -- 
 MySQL Cluster Mailing List
 For list archives: http://lists.mysql.com/cluster
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
a few suggestions...

Your slow inserts could be a concurrency issue.  If
lots of users are doing selects, those could be
interfering with your inserts, especially if they use
an ODBC/JDBC app that locks the table for the entire
read.  JDBC reads do that when you tell the driver to
stream the contents of the query, because the rows
wouldn't all fit in memory.

--

Does your import script do an INSERT for each line, or
does it combine lines into multi-value inserts?

doing an insert into ... values (...),(...),(...) 
will be much faster than doing one insert for each
row.

since your max packet size is pretty big, you should
be able to consolidate a fairly large number of rows
into one insert.

--

What settings are you using on the Percs?  What stripe
size?  What write cache are you using (back or thru)?
Are you using read-ahead cacheing (that can hurt index
performance)?

--

Defragging the filesystem probably won't have any
major impact on the speed of your application.

---

Is there another way you can approach the duplicate
problem?  For instance, if duplicate data can only be
generated in the last few days' worth of imports, you
might not have to rely on your primary key on the main
table for importing.  This only works if you can drop
the primary key because it is only used for duplicate
checking and isn't used to speed queries.  You could
instead create another table that you do all your
importing to, taking care of dupes with a primary key
on that table, then insert from that table into the
main one.  Keep a timestamp in that table and purge
the older records periodically.
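A sketch of that staging arrangement (table names and the retention window are illustrative):

```sql
-- Dupes can only occur within the import window, so only this
-- small table needs the primary key, not the huge main table.
CREATE TABLE import_window (
  pk     CHAR(12) NOT NULL,
  val    INT,
  loaded TIMESTAMP NOT NULL,
  PRIMARY KEY (pk)
);

-- Daily load: duplicates are rejected against a small index.
INSERT IGNORE INTO import_window (pk, val)
SELECT pk, val FROM temp_load;

-- Append today's survivors to the index-free main table...
INSERT INTO history (pk, val)
SELECT pk, val FROM import_window
 WHERE loaded >= DATE_SUB(NOW(), INTERVAL 1 DAY);

-- ...and periodically purge rows older than the dupe window.
DELETE FROM import_window
 WHERE loaded < DATE_SUB(NOW(), INTERVAL 7 DAY);
```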


The last thing I could think of would be a dupe-checker
table.  Create an InnoDB table that consists
of only the key columns from your big table and make all
the columns the primary key.  Essentially you have
just created an index-only table.  Insert your new
data into a temporary HEAP table, then delete from the
heap table where the key is in your dupe table.
Then insert everything from the heap table into the
big table.  Once again, this only works if you don't
need the primary key on the big table.  This will use
more CPU/memory but it may get around your read
problems.




Justin

--- matt ryan [EMAIL PROTECTED] wrote:
 
 
  You might be out of luck with MySQL ... sorry.
 
  You may need to switch to a database that has a
 parallel query 
  facility. Then - every query becomes a massive
 table scan but gets 
  divided into multiple concurrent subqueries - and
 overall the job 
  finishes in a reasonable amount of time. The
 epitomy of brute force. 
  It's hard to rationalize initially but after a
 while you see it's the 
  only way to go. Remember -  indexes are no longer
 required.
 
  We have a billion row 100GB table the users search
 any and every way. 
  Response time is less than a minute.
 
  We are anxiously waiting to see this technology
 added to MySQL. Maybe 
  one day we'll have some money to contribute to the
 effort. Parallel 
  query is not trivial. That's why these databases
 are expensive.
 
  I can send you more details privately if you are
 interested.
 
 
 I've used it, with oracle, but oracles index
 searches are better, hit 
 the best one first, then 2nd best, then 3rd, but I
 really dont want to 
 go to oracle, it's too complicated for my tech's
 
 vs mysql, hit the best one first, and use no other
 
 Query time is a non issue at this point, it's load
 time, load daily file 
 into temp table, then insert ignore into main table,
 on key violation 
 the violating record is ignored
 
 load time is the issue, the server loads files 15
 hours a day, that big 
 primary key makes loading any table over 2-3 gig
 VERY slow
 
 I thought it was a bug, everything was great untill
 you get up to about 
 3 or 4 gig, then it gets SLOW
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
INSERT IGNORE doesn't insert the record if there is a
duplicate; it simply skips the row.  Without
the IGNORE clause, the query would generate an error
instead of silently ignoring the insert.
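The difference is easy to see on a small table; a sketch:

```sql
CREATE TABLE t (id INT NOT NULL PRIMARY KEY);

INSERT INTO t VALUES (1);         -- ok
INSERT INTO t VALUES (1);         -- ERROR 1062 (duplicate key)
INSERT IGNORE INTO t VALUES (1);  -- succeeds; row silently skipped
```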

--- [EMAIL PROTECTED] wrote:
 That's the whole point. Eliminate your indexes and
 your load problems are 
 solved. Especially given the fact that you insert
 ignore and don't use the 
 primary key to validate uniqueness.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-15 Thread Justin Swanhart
 I've used it, with oracle, but oracles index
 searches are better, hit 
 the best one first, then 2nd best, then 3rd, but I
 really dont want to 
 go to oracle, it's too complicated for my tech's
Oracle rarely performs index merges, but it does have
the ability to do so, which MySQL lacks.

 Query time is a non issue at this point, it's load
 time, load daily file 
 into temp table, then insert ignore into main table,
 on key violation 
 the violating record is ignored
I know you don't want to go with Oracle, but I will
just add that it could help you here too, because it
can do parallel DML.  This is especially useful if you
have access to the partitioning option, because you
could then partition your data by hash and get a
number of bonuses.  #1 your index updates will be much
faster because there are a lot less rows to look
through.  #2 parallel DML can insert into multiple
partitions at once.  

I am unsure whether MaxDB supports any of those features,
though it may.  You may want to look into it to see if
it does.

I don't want to sound like I'm pushing Oracle.  I'm
not an Oracle sales rep, or anything like that.  I am
a professional Oracle DBA that happens to also use
mySQL a lot and I like both databases.  Sometimes one
is better than the other for solving a problem.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Search one table, then display another table where keys match ?

2004-07-14 Thread Justin Swanhart
Does access_no contain actual textual data, or is it
simply a key like a category or an integer?  If you
don't need to do a fulltext search against access_no
then there is no reason to include it in your fulltext
index.  You should most likely have a separate index
for access_no in that case.

The following query assumes access_no is exactly the
same in both balloon_txt and balloon_rec for the rows
you are matching. 

SELECT br.*
  FROM balloon_txt bt,
       balloon_rec br
       /* find the matching rows from balloon_txt */
 WHERE MATCH(bt.access_no, bt.recs_txt)
       AGAINST ('robin')
       /* and join them to rows in balloon_rec
          using the access_no column */
   AND bt.access_no = br.access_no;

if access_no doesn't need to be full text indexed, you
could drop the fulltext key and add a new one just for
recs_txt and remove bt.access_no from the MATCH()

--- leegold [EMAIL PROTECTED] wrote:
 If you would entertain a MYSQL/PHP, hope not too
 off-topicIt's 
 probably not difficult to solve - but you would be
 helping me
 with some SQL logic.
 
 The only way I can think of to explain what I want
 to do
 is to give you my working newbie MSQL/PHP code that
 I'm learning
 MYSQL/PHP with, and at a certain point in the code
 below I'll state
 exactly as I can what I want to try to do. It's
 probably
 quite simple but I can't get it- Thanks:
 
 ...
 <pre>
 <?php
 $dblink = mysql_connect( 'localhost', 'guest', 'password' );
 mysql_select_db( 'balloon', $dblink );
 // Doing a FULLTEXT search
 // Re the SELECT: I indexed both fields together, so seemed like
 // I should put them both in the MATCH...OK, it works.
 $query = "SELECT * FROM balloon_txt WHERE MATCH(access_no, recs_txt)
 AGAINST ('robin')";
 $result = mysql_query($query);
 
 /*
  OK, right here - next below I'm gonna display/loop $result from table
  balloon_txt. But, what I really want to do is take the result set
  access_no fields from the search above and (access_no is a Key in all
  my tables) and use it to generate results (ie. matching records) from
  another table called balloon_rec and display/loop the results from
  balloon_rec. So I'm searching balloon_txt, getting results, but I want
  to display matching records from another table - balloon_rec. Is there
  a way to do a join or something in the SELECT above? Or do I process
  $result? Seems a join in the SELECT above or some SQL above is cleaner
  - but not sure how(?) Thanks, Lee G.
 */
 
 while ( $row = mysql_fetch_row( $result ) ) {
  for ( $i = 0; $i < mysql_num_fields( $result ); $i++ )
   { echo $row[$i] . " "; }
  echo "\n\n\n";
 }
 // Close the db connection
 mysql_close( $dblink );
 ?>
 </pre>
 ...
 
 
 
   
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: do i need an index for this?

2004-07-14 Thread Justin Swanhart
Creating a key will make that query execute very fast,
but if that is the only reason for the key you are
going to be trading quite a lot of space for the speed
of one query.  

How often are you going to run this query?  If you
have 324 million rows, then that index is going to
consume somewhere in the order of 2G or more of disk
space.  Is it worth using all that space to make one
query faster?
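For reference, with an index on the column the optimizer can read just the two ends of the index instead of scanning all 324M rows; a sketch (the index name is illustrative, and building it on a table this size is itself a long operation):

```sql
-- One-time cost: a long build and roughly 2 GB+ of disk.
ALTER TABLE internet_usage ADD INDEX idx_time (time);

-- Afterwards this resolves from the index ends, near-instantly:
SELECT MIN(time) AS earliest, MAX(time) AS latest
  FROM internet_usage;
```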

--- J S [EMAIL PROTECTED] wrote:
 Hi,
 
 I want to find the earliest and latest times in the
 time column of my table 
 internet_usage:
 

 +--------------+----------------------+------+-----+---------+-------+
 | Field        | Type                 | Null | Key | Default | Extra |
 +--------------+----------------------+------+-----+---------+-------+
 | uid          | int(10) unsigned     |      | MUL | 0       |       |
 | time         | timestamp(14)        | YES  |     | NULL    |       |
 | ip           | int(10) unsigned     |      |     | 0       |       |
 | urlid        | int(10) unsigned     |      |     | 0       |       |
 | timetaken    | smallint(5) unsigned | YES  |     | 0       |       |
 | cs_size      | int(10) unsigned     | YES  |     | 0       |       |
 | sc_size      | int(10) unsigned     | YES  |     | 0       |       |
 | method_ID    | tinyint(3) unsigned  |      |     | 0       |       |
 | action_ID    | tinyint(3) unsigned  |      |     | 0       |       |
 | virus_ID     | tinyint(3) unsigned  |      |     | 0       |       |
 | useragent_ID | smallint(5) unsigned |      |     | 0       |       |
 +--------------+----------------------+------+-----+---------+-------+
 
 So far there are 324936160 rows. If I do :
 
 SELECT MIN(time) as earliest, MAX(time) as latest
 from internet_usage;
 
 I can see the query is going to run for a long time.
 Do I have to create an 
 index on time to speed this up or is there another
 way of doing it?
 
 Thanks for any help.
 
 JS.
 

_
 It's fast, it's easy and it's free. Get MSN
 Messenger today! 
 http://www.msn.co.uk/messenger
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql growing pains, 4 days to create index on one table!

2004-07-14 Thread Justin Swanhart
Indexes can generate vast amounts of random I/O.
Because of rotational latency, random I/O can really
slow you down, especially if you are using IDE or SATA
disks, because they can't do tagged command queuing like
SCSI disks can.

If you have the budget for it, I would consider
getting some solid state disks.  Because they have
extremely low latency you will be able to get full i/o
bandwidth on your reads.  If you can't afford those,
consider adding more disks to your RAID array so that
you can spread the reads over more spindles, which
will help performance.


--- matt ryan [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 
 You may want more indexes but you might be getting
 killed because you already have too many.
 
 To test - try loading into a table without indexes
 and see if it makes a difference.
 
 At the very least - check to see if the primary
 index which starts with 'dic' can make your special
 'dic' index superfluous.
 
 If write speed is a bottleneck you might consider
 Raid-1 instead of Raid-5.
 
 Reading lots of rows via index is a killer.
 Depending on your hardware it may be cheaper to
 table scan 50 rows than to read 1 via index.
 However, this requires partitioning of the data
 based on some column which appears in every query
 and acts as an initial filter. If you are lucky
 enough to be in that situation - consider a MERGE
 table.
 
 
   
 
 
  These tables are merged; the total table size is
  huge.  On this particular table it's 45,449,534 rows;
  however, all the merge tables
  combined are 258,840,305 records
 
 perhaps I should reorder the primary key, putting the
 longest, most unique
 record up front, and the least unique at the end,
 would that speed up 
 the key check?   I can tell that almost everything
 is read IO, very 
 little write IO
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replication - multiple masters

2004-07-13 Thread Justin Swanhart
Having that many instances on one box is going to be a
management nightmare.  I can only imagine the recovery
scenarios should you have a hardware problem.  Perhaps
you should think about writing your metric data
to a local MySQL instance, then pulling the data from
each instance into the depot database with a script
that runs every few minutes.  With this pull
methodology you don't have to worry about losing
metrics if your depot is down, and you don't have the
burden of up to twelve databases to manage on one
machine (twenty-four databases total).

Another option would be to write the metrics to a flat
file on the web server, then simply serve that file up
via HTTP.  You then have only one database to manage,
you can pull the metrics into it very easily and you
still don't have to worry about a down depot server.

In your depot you then have a couple options.  You
could have one master table that contains metrics from
all the machines and contains a column for the machine
name, or you could have a separate table for each
machine, and if you want to examine metrics for all
machines you could use a merge table.
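A sketch of the table-per-server variant (all names and columns are illustrative):

```sql
-- One MyISAM table per web server, filled by the pull script;
-- metrics_www2, metrics_www3 are defined identically.
CREATE TABLE metrics_www1 (
  hit_time TIMESTAMP NOT NULL,
  url      VARCHAR(255),
  KEY (hit_time)
) TYPE=MyISAM;

-- A MERGE table to examine all servers at once:
CREATE TABLE metrics_all (
  hit_time TIMESTAMP NOT NULL,
  url      VARCHAR(255),
  KEY (hit_time)
) TYPE=MERGE UNION=(metrics_www1, metrics_www2, metrics_www3);
```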

--- Marc Knoop [EMAIL PROTECTED] wrote:
 Jeremy Zawodny writes: 
 
  The web servers record web metrics to local mysql
 databases.  I would 
  like those local databases to be consolidated
 onto the DEPOT [as three 
  separate DBs]. 
 
  You cannot do that.
  snip 
  You'd need to run 3 instances of MySQL on DEPOT,
 one for each WWW
  server you'd like to mirror.
 
 Can you, or anyone, comment on the practicality of
 doing so?  I estimate 
 10,000 to 30,000 records per web server, per day
 using 3 remote web servers. 
 The number of web servers would not likely grow to
 more than 12. 
 
 My planned DEPOT server is a Dell PowerEdge - dual
 Xeon, 2GB memory and 
 oodles of disk space. 
 
 Could mysql, or Linux ES 3.0 for that matter, handle
 it?  Is there anyone on 
 this list running several instances of mysql on the
 same box?  Any 
 experiences to share? 
 
  --
 ../mk 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why this query doesn't group the email addresses?

2004-07-13 Thread Justin Swanhart
You are doing an implicit group by of first, last as
well as your explicit group by of email.  

So you could have two records with the same e-mail
address generate two records with your group by

   Justin Time [EMAIL PROTECTED]
   Justin Credible [EMAIL PROTECTED]  --DUPE--

Case differences between the records could also cause
dupes.  If case differences are causing it then do

select lower(first), lower(last), lower(email)
...
group by lower(first), lower(last), lower(email)

--- Wesley Furgiuele [EMAIL PROTECTED] wrote:
 What type of field is the email field?
 
 Wes
 
 On Jul 13, 2004, at 11:04 AM, Aaron Wolski wrote:
 
  Hey all,
 
  Got this query:
 
  SELECT first,last,email FROM CustomerTable AS t1,
  OrderTable AS t2, CartTable AS t3 WHERE
 t2.cart_id=t3.cart_id
  AND t1.id=t2.customer_index AND t3.submitted='1'
 AND
  t3.product_index='1' AND t3.quantity > 0
  GROUP BY t1.email ORDER BY t1.first,t1.last
 
  For some strange reason it doesn't seem to group
 the email addresses.
  I'd be hard pressed to find every occurrence out
 of 1000 records, but I
  DID quickly spot two exact same records which
 means the email address
  was not grouped.
 
  What can I do or where did I go wrong?
 
  Thanks!
 
  Aaron
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:   

http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why this query doesn't group the email addresses?

2004-07-13 Thread Justin Swanhart
 Where is the implicit group?  The 'order by'
 shouldn't effect how things
 are grouped.  On MySQL 4.0.17:

Wow.  I wasn't aware.  Is that expected behavior? 
Other databases (Oracle) generate an error when you
include columns in the select list that aren't in a
group by, or they do an implicit group by (postgres)
on the columns.  I [wrongly] assumed an implicit group
by was going on because the query hadn't generated an
error.

I guess mysql just takes the values from the first
record that matches the group expression.

That creates confusion when you do:
select first,last,email, count(*)
from foobar
group by email

Because the query will report a count of two (given
your data) when there really is only one row with that
first,last,email combination.

Oracle would require you to do:
select first,last,email, count(*)
from foobar
group by first,last,email
otherwise you would get an error that first is not a
GROUP BY expression.

That query would return four rows on your data, each
with a count of 1.

My apologies,

Justin

 
 create table bar ( email varchar(64), first
 varchar(64), last
 varchar(64) );
 
 insert into bar values ('[EMAIL PROTECTED]', 'a', 'smith'),
 ('[EMAIL PROTECTED]', 'a',
 'williams'), ('[EMAIL PROTECTED]', 'b', 'webb'), ('[EMAIL PROTECTED]',
 'c', 'oconner');
 
 mysql> select * from bar;
 +-+---+--+
 | email   | first | last |
 +-+---+--+
 | [EMAIL PROTECTED] | a | smith|
 | [EMAIL PROTECTED] | a | williams |
 | [EMAIL PROTECTED] | b | webb |
 | [EMAIL PROTECTED] | c | oconner  |
 +-+---+--+
 
 mysql> select first,last,email from bar b group by
 b.email order by
 b.first, b.last;
 +---+---+-+
 | first | last  | email   |
 +---+---+-+
 | a | smith | [EMAIL PROTECTED] |
 | b | webb  | [EMAIL PROTECTED] |
 +---+---+-+
 
 
  Case differences between the records could also
 cause
  dupes.  If case differences are causing it then do
  
  select lower(first), lower(last), lower(email)
  ...
  group by lower(first), lower(last), lower(email)
 
 Case or extra whitespace is a definite possibility. 
 Aaron, try to find
 at least one occurrence of duplicate email addresses
 and then post a
 small (5 row) dataset that exhibits the problem you
 are having.
 
 Garth
 
  --- Wesley Furgiuele [EMAIL PROTECTED] wrote:
   What type of field is the email field?
   
   Wes
   
   On Jul 13, 2004, at 11:04 AM, Aaron Wolski
 wrote:
   
Hey all,
   
Got this query:
   
SELECT first,last,email FROM CustomerTable AS
 t1,
OrderTable AS t2, CartTable AS t3 WHERE
   t2.cart_id=t3.cart_id
AND t1.id=t2.customer_index AND
 t3.submitted='1'
   AND
 t3.product_index='1' AND t3.quantity>0
GROUP BY t1.email ORDER BY t1.first,t1.last
   
For some strange reason it doesn't seem to
 group
   the email addresses.
I'd be hard pressed to find every occurrence
 out
   of 1000 records, but I
DID quickly spot two exact same records which
   means the email address
was not grouped.
   
What can I do or where did I go wrong?
   
Thanks!
   
Aaron
   
   
   -- 
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
   
   
 -- 
 . Garth Webb
 . [EMAIL PROTECTED]
 .
 . shoes * 鞋子 * schoenen * 단화 * chaussures *
 zapatos
 . Schuhe * παπούτσια * pattini * 靴 *
 sapatas * ботинки
 
 
 





Re: Oracle 2 MySQL updates/replication?

2004-07-13 Thread Justin Swanhart
An option would be a log reader program that uses
Oracle LogMiner to show only committed transactions
from the redo logs.  You could then replay the SQL
being executed on the Oracle box against the MySQL
server, as long as the tables are defined the same.

9i has an enhanced LogMiner that can be used to read
8i redo logs as well, so you might want to use the
newer 9i client if you go this way.

Updates going back to the Oracle database could be
processed the same way using the MySQL binary log, as
long as no MySQL extensions were used, such as
inserting multiple rows with a single INSERT statement.

--- Jeremy Zawodny [EMAIL PROTECTED] wrote:
 On Tue, Jul 13, 2004 at 06:11:22PM -0700, Carl
 Edwards wrote:
  Hello,
  
  I found a question about Oracle 2 MySQL
 replication in the
  archive on Sep. 2001 but no mention since?
  
  We have a  department using Oracle 8.1.7 and I'm
 running MySQL
  4.0 and neither of us wants to change :-)
  
  I could call a Perl, C++ or Java program from cron
 to periodically
  update the MySQL instance from Oracle but was
 hoping to use a
  trigger/stored procedure to initiate the update so
 it seems more
  real time.  Does this seem possible?
  
  Of course it may turn out non-trivial to write the
 synchronization
  code so I'll take suggestions on that front also.
 
 Golden Gate Software makes a product that does this.
  I'd have a look
 at what they offer.
 
 Jeremy
 -- 
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux
 Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/
 
 [book] High Performance MySQL --
 http://highperformancemysql.com/
 
 
 





Re: Upgrade to mysql 4.0 in Fedora Core 2

2004-07-12 Thread Justin Swanhart
"rpm -qa | grep mysql" will show you what MySQL
packages you have installed.

You probably have both 3.x and 4.x packages installed.
Assuming you don't have a 3.x database you want to
preserve, I would suggest uninstalling the 3.x package
with "rpm --erase name_of_3.x_package".

To determine where the files in a package are
installed, you can query the package via rpm.  To do
so, simply run "rpm -ql name_of_package".

You can use rpm -ql to find where the 4.0 rpm
installed its binaries and execute them from there.


--- A. Lanza [EMAIL PROTECTED] wrote:
 After installing FC2 i realized that the mysql
 server version installed
 was 3.x. I wanted to install version 4.0, so i got
 the rpm
 package from mysql website and i installed it on my
 system.
 
 Now it seems like mysql server is not running. I get
 no output for the
 command ps -A | grep mysql. Also, when i try to
 connect to mysql
 server using mysql client, i get the following error
 message:
 
 ERROR 2002: Can't connect to local MySQL server
 through socket
 '/var/lib/mysql/mysql.sock' (2)
 
 It seems like mysql server does not automatically
 start when i boot up
 my computer... how can i start it manually? Where in
 the filesystem does
 mysql server get installed after installing the rpm?
 
 Though, when i run the Gnome Service config.
 utility, i can see mysql
 service on the list, with a checkmark, and it
 appears to be running...
 i'm so confused about this.
 
 Please help.
 
 Thanks in advance.
 
 
 
 
 





Re: Implementing full text searching on an existing, production database.

2004-07-12 Thread Justin Swanhart
Keep in mind that if you create an index on multiple
fields, then all of those fields must be searched at
once.  You can't index product_name, product_desc and
product_category for instance, then only search on the
product_name field using MATCHES.

If you want to bypass this (and many other
limitations, including stoplists, short words, etc.)
then I would suggest indexing your data with a
separate text indexing system like Jakarta Lucene
(http://jakarta.apache.org/lucene/docs/index.html).
Using a product like Lucene will also let you
implement your parametric searching much more easily.
You can either define additional parametric fields in
your Lucene index, or create a second index and, with
the API, very easily merge the searches between the
indexes.

Keep in mind that a major limitation of the mysql
fulltext engine is that it can't index more than 500
characters which could be a major drawback for your
parametric data.

The following assumes you will stick with mysql
fulltext indexes...

In order to index 3-letter words, you will need to set
ft_min_word_len in your my.cnf file.  You probably
also want to create your own list of stopwords and
point ft_stopword_file at it.
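A sketch of the corresponding my.cnf section (ft_min_word_len and ft_stopword_file are the actual MySQL option names; the stopword file path is an example).  After changing either setting, existing FULLTEXT indexes must be rebuilt, e.g. with REPAIR TABLE ... QUICK:

```ini
[mysqld]
# index words as short as 3 characters (the default minimum is 4)
ft_min_word_len = 3
# custom stopword list; path is an example
ft_stopword_file = /etc/mysql/ft_stopwords.txt
```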

If your table is large then the biggest problem you
are going to have when creating the index is that the
table will be locked while the index is being created.

If that is a problem then I would suggest that you
create a separate table with CREATE TABLE ... AS
SELECT.  You will need some way of keeping track of
any additions to the base table from this point on,
perhaps using the highest product_id, or, if you have
a modification timestamp on your table, use that.
Create the text index on the new table and test it
out.  When you are satisfied that everything is
working OK, rename the new table to the old table,
and insert/update the records that have been
added/modified since you created the copy.

If you can suffer the downtime then simply create the
index on the table and wait it out.

There is very little danger in adding the text index
to a production table.  Just back up your database
before you make the modifications and you should have
no problems whatsoever.
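A runnable sketch of the copy-then-swap idea, demonstrated with SQLite (table names and data are invented; in MySQL the rename step would be RENAME TABLE, and the slow step is the FULLTEXT build on the copy):

```python
import sqlite3

# Build a new table aside, index it, then swap it for the live table and
# back-fill the rows that arrived while the index was being built.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE products (product_id INTEGER, name TEXT);
INSERT INTO products VALUES (1, 'DVD player'), (2, 'USB cable');
""")

# 1. Copy the live table.
con.execute("CREATE TABLE products_new AS SELECT * FROM products")
# 2. Create the index on the copy (a plain index here; in MySQL this is
#    where the slow FULLTEXT build happens, without locking the live table).
con.execute("CREATE INDEX idx_products_new_name ON products_new (name)")
# 3. A row arrives in the live table while the index builds.
con.execute("INSERT INTO products VALUES (3, 'DDR memory')")
# 4. Swap the tables and back-fill rows added since the copy was taken,
#    tracked here by the highest product_id present in the copy.
max_copied = con.execute(
    "SELECT MAX(product_id) FROM products_new").fetchone()[0]
con.execute("ALTER TABLE products RENAME TO products_old")
con.execute("ALTER TABLE products_new RENAME TO products")
con.execute(
    "INSERT INTO products SELECT * FROM products_old WHERE product_id > ?",
    (max_copied,),
)
print(con.execute("SELECT COUNT(*) FROM products").fetchone()[0])  # 3
```

The swapped-in table ends up with all three rows, including the one inserted mid-build.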

--- Stuart Grimshaw [EMAIL PROTECTED] wrote:
 Hi All,
 
 I'm currently engaged in a project to implement full text searching
 of our product database. The current implementation was written for an
 older version of MySQL and doesn't implement BOOLEAN MODE.
 
 Currently, the full text search is against a de-normalised table built
 from fields including the product's ID, title & description, in a
 simple table:
 
 CREATE TABLE text_search (
 product_uid int(10) NOT NULL,
 rank int(10) NOT NULL DEFAULT 0,
 txt TEXT
 );
 
 with the full text index set up against txt.
 
 There are several problems with this implementation,
 firstly the
 de-normalised txt field doesn't include all
 information on a
 product, so I would like to implement the search
 against the full
 product table. There are approx 65,000+ products in
 the table, and the
 index would be on 2 or 3 fields in that table. Has
 anyone retro-fitted
 a full text index to a production database? Is there
 an established
 strategy for doing this?
 
 Because of the nature of our business we sell a lot of products where
 the keyword is 3 letters: DVD, USB, DDR, etc. The manual mentions that
 while you can reduce the minimum number of letters, it's not a good
 idea: "Modifying the default behavior will, in most cases, make the
 search results worse," it says. Is there a better way to ensure these
 search terms are included in the index?
 
 Finally, we also carry parametric data on our products; it would be
 good to include this information in the full text search. The only way
 I can think of is to create a separate full text index on the table
 storing the parametric data, and then run the query against both
 tables, merging the results in code.
 
 -- 
 -S
 
 
 





Re: When is mysql 4.1.x , production?

2004-07-12 Thread Justin Swanhart
A beta takes as long as a beta takes.  That is really
the nature of beta testing.  As for an approximate
timeline, I've heard various quotes, but most people
seem to think somewhere late third quarter that the
release will be marked stable.  

4.1.3 is really quite stable and you should have very
few problems with it.  If you are developing a new
product and you need features that are available only
in the 4.1 release, then you are highly encouraged to
test the release.  By doing so you help to move the
beta forward because in the unlikely event that you do
find any problems you can report them and they will
get resolved.  


--- Ben David, Tomer [EMAIL PROTECTED] wrote:
 5-7 more beta releases
 
 How much time does one beta release take
 (approximately)?
 
 Thanks :)
 
 Original Message:
 From: Josh Trutwin [EMAIL PROTECTED]
 To: 
 CC: [EMAIL PROTECTED]
 Subject: Re: When is mysql 4.1.x , production?
 Date: Mon Jul 12 16:41:32 GMT 2004
 
 On Mon, 12 Jul 2004 12:05:53 +
 Ben David, Tomer [EMAIL PROTECTED] wrote:
 
  when is mysql 4.1.x going to be released for
 production?
 
 When it is ready I'd guess.  :)
 
 If history is a predictor, though, I would expect
 4.1.x to go through 5-7 more beta releases.
 Help the developers out by testing it in your
 environment.
 
 Josh
 
 
 
 
 
 





Re: Implementing full text searching - more questions

2004-07-12 Thread Justin Swanhart
 Does that mean the max. string that can be indexed
 and therefore searched on is 500 chars? What exactly
 is this limitation?
I may have been wrong about this limit.  I know I read
about it somewhere, but I can't seem to find where at
the moment.  Since the fulltext index is maintained
as a separate B-tree with each word from the record
and its local weight, I am nearly certain I was wrong
in making the 500-character-limit assertion (though a
single word is limited to that length).

 Can I just add words or append words to the existing
 default stopword file? After I add words do I have
 to
 reindex the fields or restart anything? (I'm using
 win32)

You can't modify the default stoplist, but you can
create your own using ft_stopword_file (just include
the words on the default stoplist).  If you change the
stopwords you must rebuild your index.

 
 If I insert/add a record to a fulltext table do I
 have
 to redo(reindex) anything? Assuming I'm using a
 recent ver.
 of MYSQL.

Fulltext indexes are maintained just like normal
indexes when you insert/update/delete rows.  They will
(sometimes greatly) affect the speed of your DML
operations, but no special syncing is necessary to
keep them up to date with your table data.

 Thanks, just genral questions...thanks.
You are quite welcome.






Re: innodb filesystem on software raid

2004-07-10 Thread Justin Swanhart
I highly recommend simply using ext3 for your Linux
setup.  The 1 or 2 percent performance benefit that
you may get from raw partitions is far outweighed by
the complexity of backing up the raw data.

either way:

First I would suggest you read the Linux RAID howto:
http://www.tldp.org/HOWTO/Software-RAID-HOWTO.html

Here are the basic steps:
create a /etc/raidtab file for your array (probably
md0) using a 32k or 64k chunk size
(hint: man raidtab)

run mkraid to initialize the new raid array (md0)
(hint: man mkraid)

if you want to use raw partitions:
-
#this is redhat/fedora specific
add /dev/md0 to /etc/sysconfig/rawdevices
(hint: man raw)

add chown mysql:mysql /dev/raw/raw0 to
/etc/init.d/rc.local 

if you want to use ext3:
-
mke2fs -j -T largefile4 /dev/md0
(hint: man mke2fs)



--- Scott Mueller [EMAIL PROTECTED] wrote:
 I bought a supermicro 6013p-t for the 4 sata raid
 hard drives support.
 Unfortunately, it doesn't really have raid at all. 
 So I'm forced to use
 software raid.  What I'd like to use is fedora core
 2 with an innodb
 filesystem on a software raid partition according to
 these instructions:
 

http://dev.mysql.com/doc/mysql/en/InnoDB_Raw_Devices.html
 
  
 
 Has anybody done this?  I'm not a linux expert and
 so I'm not sure exactly
 how to set this up.  When I try to setup software
 raid in linux, I'm forced
 to pick a filesystem and its mount point as part of
 the process of creating
 a software raid partition.  So this is the part
 that's stumping me.  How do
 I create a software raid raw device only to use as
 an innodb filesystem?  Is
 this possible?  Or maybe this can't be done without
 hardware raid and I need
 to buy a new server?
 
  
 
 GNU's Parted software
 (http://www.gnu.org/software/parted/) has this to
 say
 regarding features, Supported disk labels: raw
 access (useful for RAID and
 LVM), ms-dos, etc.  But I've been unable to create
 a raw software raid
 device with that as well.
 
  
 
 Scott Mueller
 
 AccelerateBiz Managed Hosting
 
 http://www.acceleratebiz.com
 http://www.acceleratebiz.com/ 
 
 Phone: (800) 360-7360
 
 Fax: (270) 778-3081 
 
  
 
 





RE: SELECT DISTINCT + ORDER BY confusion

2004-07-09 Thread Justin Swanhart
If you are using 4.1 you could try:

SELECT DISTINCT id, title
FROM 
(select p.id, p.title 
from product p
join e_prod ep on ep.product=p.id
join story s on s.id = ep.story and s.status = 9 and
s.type = 14
where p.platform_id = 5 and p.genre_id = 23282
order by s.post_date desc
) AS dt
limit 10

otherwise:

select p.id, p.title
  from product p
  join e_prod ep on ep.product = p.id
  join story s on s.id = ep.story and s.status = 9
   and s.type = 14
 where p.platform_id = 5 
   and p.genre_id = 23282
group by p.id, p.title
order by max(s.post_date) desc
limit 10
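One way to get exactly one row per product with the newest products first is to group on the product and order by each group's maximum post_date. A small SQLite demo with a simplified schema (no e_prod link table; the ids, titles and dates are toy data):

```python
import sqlite3

# Collapse duplicate products via GROUP BY, then order by the newest
# story in each group.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (id INTEGER, title TEXT);
CREATE TABLE story (id INTEGER, product INTEGER, post_date TEXT);
INSERT INTO product VALUES (917958, 'Port Royale 2'), (914811, 'The Sims 2');
INSERT INTO story VALUES
  (1, 917958, '2004-07-01'),
  (2, 917958, '2004-07-05'),
  (3, 914811, '2004-07-03');
""")

rows = con.execute("""
    SELECT p.id, p.title
      FROM product p
      JOIN story s ON s.product = p.id
     GROUP BY p.id, p.title
     ORDER BY MAX(s.post_date) DESC
     LIMIT 10
""").fetchall()
print(rows)  # Port Royale 2 first: its newest story is the most recent
```

Each product appears once, even though Port Royale 2 has two qualifying stories.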


--- Victor Pendleton [EMAIL PROTECTED] wrote:
 Have you tried using a group by clause? Group by
 title
 
 -Original Message-
 From: news
 To: [EMAIL PROTECTED]
 Sent: 7/9/04 3:08 PM
 Subject: SELECT DISTINCT + ORDER BY confusion
 
 I've got a product & story setup where there can be
 multiple stories of 
 a given type for any product.  I want to find the
 names of the products 
 with the most-recently-posted stories of a certain
 type.  This query 
 works well:
 
 SELECT p.id,p.title
 FROM product p
 join e_prod ep on ep.product=p.id
 join story s on s.id = ep.story and s.status = 9 and
 s.type = 14
 where p.platform_id = 5 and p.genre_id = 23282
 order by s.post_date desc
 limit 10
 

 +--------+-----------------------------------------------------+
 | id     | title                                               |
 +--------+-----------------------------------------------------+
 | 917958 | Port Royale 2                                       |
 | 917958 | Port Royale 2                                       |
 | 917958 | Port Royale 2                                       |
 | 919355 | Warhammer 40,000: Dawn of War                       |
 | 918989 | The Lord of the Rings, The Battle for Middle-earth  |
 | 914811 | The Sims 2                                          |
 | 919973 | RollerCoaster Tycoon 3                              |
 | 915040 | Soldiers: Heroes of World War II                    |
 | 915040 | Soldiers: Heroes of World War II                    |
 | 915040 | Soldiers: Heroes of World War II                    |
 +--------+-----------------------------------------------------+
 
 
 however since there are multiple stories of the
 correct type for some of
 
 those products, i would like to dedupe the results
 and just get a unique
 
 list of products.  however, if i use SELECT DISTINCT
 it applies that 
 BEFORE it does the sort, so i don't get only the
 most recent products. 
 what i actually get seems to be pretty random.
 
 SELECT DISTINCT p.id,p.title
 FROM product p
 join e_prod ep on ep.product=p.id
 join story s on s.id = ep.story and s.status = 9 and
 s.type = 14
 where p.platform_id = 5 and p.genre_id = 23282
 order by s.post_date desc
 limit 10
 
 ++---+
 | id | title |
 ++---+
 | 917958 | Port Royale 2 |
 | 920457 | Cuban Missile Crisis  |
 | 915000 | Axis & Allies                         |
 | 919602 | Blitzkrieg: Burning Horizon   |
 | 914594 | SuperPower 2  |
 | 914911 | Kohan II: Kings of War|
 | 915017 | Sid Meier's Pirates!  |
 | 918842 | Warlords Battlecry III|
 | 919973 | RollerCoaster Tycoon 3|
 | 920314 | Immortal Cities: Children of the Nile |
 ++---+
 
 that's pretty messed up.  really what i'd like is:
 
 

 +--------+-----------------------------------------------------+
 | id     | title                                               |
 +--------+-----------------------------------------------------+
 | 917958 | Port Royale 2                                       |
 | 919355 | Warhammer 40,000: Dawn of War                       |
 | 918989 | The Lord of the Rings, The Battle for Middle-earth  |
 | 914811 | The Sims 2                                          |
 | 919973 | RollerCoaster Tycoon 3                              |
 | 915040 | Soldiers: Heroes of World War II                    |
 | 914468 | Perimeter                                           |
 | 915000 | Axis & Allies                                       |
 | 914811 | The Sims 2                                          |
 | 918989 | The Lord of the Rings, The Battle for Middle-earth  |
 +--------+-----------------------------------------------------+
 (i built this by hand just to demonstrate the
 desired outcome.)
 
 
 is there any way to do a post-order distinct?
 
 -jsd-
 
 
 
 
 



Re: problem upgrading from 4.1.0-alpha to 4.1.3-beta on Solaris 9

2004-07-09 Thread Justin Swanhart
Do you have a hosts.MYD, or a hosts.frm file?

If you do, and there is no .MYI file, perhaps the
older version is just ignoring the table and not
making it available while the newer version errors
out.

If those files exist, try removing them from the data
directory (move them somewhere else) then starting the
new version.  

Hope that helps,

swany



--- John Fink [EMAIL PROTECTED] wrote:
 
 Hey folks,
 
 My mysql-fu is minimal to the point of nonexistent,
 so please forgive any
 vagaries that come across:
 
 I've recently compiled 4.1.3 to replace 4.1.0 on a
 machine here where I
 work.  The compile and install went fine (as far as
 I can tell, anyway),
 but when I try to start mysqld via the init script
 it dies almost
 immediately and I get the following lines in my .err
 file:
 
 040709 13:41:04  mysqld started
 040709 13:41:04  InnoDB: Started; log sequence
 number 0 43912
 040709 13:41:04  Fatal error: Can't open privilege
 tables: Can't find file: 'host.MYI' (errno: 2)
 040709 13:41:04  mysqld ended
 
 Indeed, I have no host.MYI -- I thought perhaps it
 might be a directory
 permissions error or something, but it's not.  4.1.0
 trundled happily
 along without a host.MYI file in my db's mysql
 directory.  Furthermore,
 none of our other machines with MySQL (running
 various 3.23 and 4.0) have
 them either.
 
 Thanks for any help you can provide,
 
 jf.
 
 
 
 
 





RE: How to query an oracle table from a mysql database

2004-07-07 Thread Justin Swanhart
No, that isn't possible using MySQL.

Try linking PHP with older client libraries (9.0.1,
8.1.5, etc) instead of the newer 9.2 libraries and see
if that fixes your problem with PHP.  You can download
them from otn.oracle.com

swany


--- Alonso, Claudio Fabian
[EMAIL PROTECTED] wrote:
 Hello Steve, hello Victor,
 Thanks for your answers.
 My PHP application needs to see this Oracle table,
 but as far as I could see
 PHP has problems with Oracle 9.2.0. I got a
 connection problem and found in
 PHP's bug database that it isn't currently
 working.
 As I'm familiar with PHP/MySQL, I'm trying to see
 the Oracle table through
 MySQL.
 That's why I'm thinking on a way to create in MySQL
 a view (or something
 similar) that refers to an external database table
 (in this case, Oracle).
 I don't know if this kind of solution is possible,
 using only MySQL to see a
 remote Oracle table, not including any programming
 language.
 
 --Claudio
 ..
 
 
 





Re: moving records between tables?

2004-07-07 Thread Justin Swanhart
LOCK TABLE active_table WRITE, history_table WRITE;

#assuming the columns in the tables are exactly
#the same; if you only move matching rows, put the
#same WHERE clause on both statements
insert into history_table
  select * from active_table;

delete from active_table;

UNLOCK TABLES;


if the columns aren't the same between the tables,
then you need to do something like:

insert into history_table (colA, colB, colC, ...)
  select col1, col2, col3, ... from active_table;
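The same move can be sketched with Python's sqlite3, wrapped in a transaction (the analogue of the LOCK TABLES bracket above; the table layout and the "done = 1" criterion are invented for the demo):

```python
import sqlite3

# Move matching rows from the active table to the history table atomically.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE active_table  (id INTEGER, payload TEXT, done INTEGER);
CREATE TABLE history_table (id INTEGER, payload TEXT, done INTEGER);
INSERT INTO active_table VALUES (1, 'a', 1), (2, 'b', 0), (3, 'c', 1);
""")

with con:  # BEGIN ... COMMIT; rolls back automatically on error
    # Use the same WHERE clause on both statements so exactly the
    # copied rows are the ones removed.
    con.execute(
        "INSERT INTO history_table SELECT * FROM active_table WHERE done = 1")
    con.execute("DELETE FROM active_table WHERE done = 1")

print(con.execute("SELECT COUNT(*) FROM active_table").fetchone()[0])
print(con.execute("SELECT COUNT(*) FROM history_table").fetchone()[0])
```

After the move, only the unfinished row remains in active_table and the two finished rows are in history_table.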


Hope that helps,

Swany
--- darren [EMAIL PROTECTED] wrote:
 Hi all,
 
 I have 2 tables...one for keeping active items while
 the other is for older
 records.
 
 A housekeep program will come in every night to move
 some records (matching
 several criteria from the active table to the
 history one.
 
 I am currently doing SELECT, INSERT and then DELETE.
 i.e. a select * from
 active where key=key_value limit 1 and then a
 insert into history... and
 then a delete * from active where
 pri_key='pri_key_value'...
 
 I am sure there's a better way right??
   
 
 
 





Re: Space is filling up

2004-07-07 Thread Justin Swanhart
--- Asif Iqbal [EMAIL PROTECTED] wrote:
 gerald_clark wrote:
  What about getting a bigger drive?
 
 I guess that would be my only option eh?
 

If any of your data can be considered history data
that is never modified, you could create compressed
MyISAM tables for that data, removing it from the
InnoDB tablespaces.

This could give you a little breathing room until you
can get a bigger drive.




Re: INSERT DISTINCT?

2004-07-07 Thread Justin Swanhart
Create a unique index on each column (or combination
of columns) that you don't want to be duplicated:

create UNIQUE index table_u1 on your_table (some_column);

A plain INSERT of a duplicate will then fail with an
error; use INSERT IGNORE to skip such rows silently.
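A quick demonstration with SQLite (toy table and value): the duplicate insert raises an error, and an ignore-style insert skips it silently. SQLite spells it INSERT OR IGNORE; MySQL's equivalent is INSERT IGNORE:

```python
import sqlite3

# A unique index makes the database itself reject duplicate values.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (some_column TEXT)")
con.execute("CREATE UNIQUE INDEX table_u1 ON t (some_column)")

con.execute("INSERT INTO t VALUES ('john')")
try:
    con.execute("INSERT INTO t VALUES ('john')")  # violates the unique index
except sqlite3.IntegrityError as e:
    print("rejected:", e)

con.execute("INSERT OR IGNORE INTO t VALUES ('john')")  # silently skipped
print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1
```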

--- John Mistler [EMAIL PROTECTED] wrote:
 Is there a way to do an INSERT on a table only if no
 row already exists with
 the same info for one or more of the columns as the
 row to be inserted?
 That is, without using a method outside SQL?
 
 Thanks,
 
 John
 
 
 
 





Re: selecting rows that match two criteria

2004-07-07 Thread Justin Swanhart
You can do it one of two ways.  Either you can do a
self join like the following:

select t1.userid 
 from answers t1,
  answers t2
where t1.qid = 5
  and lower(t1.answer)='student'
  and t2.qid = 6
  and lower(t2.answer) like 'edu%'
  and t1.userid = t2.userid

or, if you want users matching either condition, you
can use a union (available in 4.0.0 or greater):

select userid 
  from answers
 where qid = 5 and lower(answer) = 'student'
UNION
select userid
  from answers
 where qid = 6 and lower(answer) like 'edu%'

UNION automatically does a DISTINCT, so each userid
comes back only once.  Note that the union matches
either answer; the self join above is what you want
when a user must have given both.
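The two queries behave differently, which a toy run makes clear (SQLite as a stand-in for MySQL; users 3000 and 3001 and their answers are invented):

```python
import sqlite3

# User 3000 answered both questions, user 3001 only one: the self-join
# returns just 3000, while the UNION also picks up 3001.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE answers (qid INTEGER, userid INTEGER, answer TEXT);
INSERT INTO answers VALUES
  (5, 3000, 'Student'), (6, 3000, 'Education'),
  (5, 3001, 'Student');
""")

both = con.execute("""
    SELECT t1.userid
      FROM answers t1, answers t2
     WHERE t1.qid = 5 AND LOWER(t1.answer) = 'student'
       AND t2.qid = 6 AND LOWER(t2.answer) LIKE 'edu%'
       AND t1.userid = t2.userid
""").fetchall()
print(both)  # only the user who answered both

either = con.execute("""
    SELECT userid FROM answers WHERE qid = 5 AND LOWER(answer) = 'student'
    UNION
    SELECT userid FROM answers WHERE qid = 6 AND LOWER(answer) LIKE 'edu%'
""").fetchall()
print(sorted(either))  # anyone who matched either condition
```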


--- Jonathan Duncan [EMAIL PROTECTED] wrote:
 I am trying to figure out what my select statement
 should be to combine
 both of these into one:
 
 SELECT userid FROM Answers WHERE answer LIKE 'Education%';
 SELECT userid FROM Answers WHERE answer='Student';
 
 Table Answers looks like:
 -id int
 -userid int
 -answer text
 
 for each row there would be something like:
 +-++--+
 | qid | userid | answer   |
 +-++--+
 |  5  |   3000 | Student  |
 +-++--+
 
 There are often multiple answers for each question.
 
 I want to select the userid's from the table where
 people answered
 'Student' in question 5 and 'Education' in question
 6.
 
 I am using MySQL 4.08 so I can't do the subquery
 thing.
 
 Thanks,
 Jonathan Duncan
 
 
 





Re: SELECT almost every column

2004-05-14 Thread Justin Swanhart
--- John Mistler [EMAIL PROTECTED] wrote:
 Is there a SELECT statement, if I have a table with
 50 columns, to select
 every column EXCEPT the last one?  Can I do this
 without typing the name of
 all 49 columns?
 
 If so, then what if I want to exclude the last TWO
 columns?
 
 Thanks,
 
 John

There is no construct in SQL for selecting all of a
table's columns except a given few.

The traditional answer to this question would normally
be "use views", but since MySQL doesn't support them
that doesn't help you very much.

Unless the extra columns are long text columns or
contain BLOBs, I see no harm in just selecting them
along with the rest of the columns by using
"select * from ...".

If you are accessing the database from a programming
environment then you could do the following:

[pseudo code]
$sql = "desc $NAME_OF_TABLE"
$result = exec($sql)
$rows = fetch_result_into_array($result)
destroy($result)
$cnt = count($rows) - $NUMBER_OF_COLUMNS_TO_OMIT
if ($cnt <= 0) 
{  error("too few columns");
   return;
}
$sql = "select "
for ($i = 0; $i < $cnt - 1; $i++)
{ $sql = $sql + $rows[$i]["Field"] + ", "
}
$sql = $sql + $rows[$cnt - 1]["Field"]

$sql = $sql + " FROM $NAME_OF_TABLE_TO_SELECT_FROM"
$sql = $sql + " WHERE $WHERE_CLAUSE"
$sql = $sql + " GROUP BY $GROUP_BY_CLAUSE"
$sql = $sql + " HAVING $HAVING_CLAUSE"
$sql = $sql + " ORDER BY $ORDER_BY_CLAUSE"
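A runnable version of the idea in Python, using cursor.description to discover the column names (shown against SQLite with an invented table; with MySQL you could instead build the list from the output of DESC table):

```python
import sqlite3

def select_all_but_last(con, table, omit):
    """Build a SELECT listing every column of `table` except the last `omit`."""
    cur = con.execute(f"SELECT * FROM {table} LIMIT 0")
    cols = [d[0] for d in cur.description]  # column names, in table order
    if len(cols) <= omit:
        raise ValueError("too few columns")
    return "SELECT " + ", ".join(cols[:-omit]) + f" FROM {table}"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE wide (a INTEGER, b INTEGER, c INTEGER, d BLOB)")
con.execute("INSERT INTO wide VALUES (1, 2, 3, x'00')")

sql = select_all_but_last(con, "wide", 1)   # skip the trailing BLOB column
print(sql)                                  # SELECT a, b, c FROM wide
print(con.execute(sql).fetchone())          # (1, 2, 3)
```

The WHERE/GROUP BY/ORDER BY clauses from the pseudocode could be appended to the returned string the same way.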

