RE: MySQL Website

2004-04-21 Thread Brad Teale
It appears to be the web server.  I can reach mysql.com just fine with a
traceroute, but I can't get a HEAD request or a web page to come up!

traceroute to mysql.com (66.35.250.190), 30 hops max, 38 byte packets
 1  cdm-208-180-236-1.cnro.cox-internet.com (208.180.236.1)  18.963 ms  10.260 ms  12.200 ms
 2  cdm-208-180-1-50.cnro.cox-internet.com (208.180.1.50)  7.622 ms  9.933 ms  9.904 ms
 3  cdm-208-180-1-73.cnro.cox-internet.com (208.180.1.73)  17.948 ms  17.666 ms  14.908 ms
 4  dllsbbrc01-gew0402.ma.dl.cox-internet.com (66.76.45.145)  128.870 ms  182.677 ms  91.958 ms
 5  dllsdsrc01-gew0303.rd.dl.cox.net (68.1.206.5)  23.685 ms  26.633 ms  22.810 ms
 6  dllsbbrc01-pos0101.rd.dl.cox.net (68.1.0.144)  23.805 ms  26.595 ms  27.092 ms
 7  12.119.145.125 (12.119.145.125)  79.373 ms  78.874 ms  75.386 ms
 8  gbr6-p30.dlstx.ip.att.net (12.123.17.54)  75.101 ms  79.933 ms  74.823 ms
 9  tbr2-p013701.dlstx.ip.att.net (12.122.12.89)  82.161 ms  80.284 ms  77.678 ms
10  ggr2-p390.dlstx.ip.att.net (12.123.17.85)  78.322 ms  75.077 ms  81.961 ms
11  dcr2-so-4-0-0.Dallas.savvis.net (208.172.139.225)  76.214 ms  77.886 ms  76.674 ms
12  dcr2-loopback.SantaClara.savvis.net (208.172.146.100)  108.356 ms  105.723 ms  112.343 ms
13  bhr1-pos-0-0.SantaClarasc8.savvis.net (208.172.156.198)  95.535 ms  88.560 ms  84.063 ms
14  csr1-ve243.SantaClarasc8.savvis.net (66.35.194.50)  88.678 ms  86.770 ms  85.408 ms
15  66.35.212.174 (66.35.212.174)  89.425 ms  89.129 ms  98.684 ms
16  mysql.com (66.35.250.190)  87.200 ms  85.178 ms  87.600 ms

Thanks,
Brad Teale
Universal Weather and Aviation, Inc.
mailto:[EMAIL PROTECTED]


 -Original Message-
 From: Peter Burden [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, April 21, 2004 7:35 AM
 To: Lehman, Jason (Registrar's Office)
 Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: MySQL Website
 
 
 Lehman, Jason (Registrar's Office) wrote:
 
 I should have been clearer.  I can't reach the website.  I can get to
 lists.mysql.com with no problem, except for the fact that images won't
 pull from www.mysql.com, but I definitely come to a grinding halt when I
 try to reach www.mysql.com.  I can't do a tracert because the university
 has shut that off here.  But I guess it is working for everyone else.
 
 
 I'm experiencing similar problems - using both Mozilla and IE.
 
 'wget' eventually got the HTML but it took nearly 2 minutes.
 The headers don't suggest anything strange.
 
 This is also a University site with 'traceroute' disabled and everything
 accessed through a cache.
 
 www.netcraft.com's site analysis also doesn't suggest anything untoward.
 
 -Original Message-
 From: Rhino [mailto:[EMAIL PROTECTED] 
 Sent: Tuesday, April 20, 2004 6:31 PM
 To: Lehman, Jason (Registrar's Office)
 Subject: Re: MySQL Website
 
 
 - Original Message - 
 From: Lehman, Jason (Registrar's Office) [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, April 20, 2004 11:53 AM
 Subject: MySQL Website
 
 Does anyone know what is going on with the MySQL website?
 
 It appears to be undergoing a major redesign.  The sections appear to be
 organized differently, and the style sheets have also changed.
 
 Or did you have something else in mind?
 
 Rhino
 
 
 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




RE: Ancestry program

2003-10-28 Thread Brad Teale
It has been a while since I have looked, but I believe the National
Genealogical Society has a data model for family tree software.  The
following links are to the NGS, and GEDCOM is the file format standard.
I think it should be an easy conversion to a database structure.  If you
build something that exports the data, it should probably export in the
GEDCOM format, because that is what most software packages will import.

http://www.ngsgenealogy.org/
http://www.gentech.org/ngsgentech/main/Home.asp

GEDCOM seems to be the standard file format:
http://www.gendex.com/gedcom55/55gctoc.htm

Brad

-Original Message-
From: Dan Greene [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 28, 2003 12:49 PM
To: Nitin; [EMAIL PROTECTED]
Subject: RE: Ancestry program


well... when I do db design, I tend to start with the objects of my system.
The one that comes to mind in your case is people.  

so you'll need a people table.

well, what are the details of a person?
first_name
Last_name
Middle_name1
Middle_name2
Maiden_name
[any other basic bio data]


so you'll need those columns

Well, to keep track of each person, each one will need an ID... IDs are
usually numbers, so now you add a:
person_id
field.  This field would likely have an auto_increment attribute to help
number them for you.

ok... now that we have people, what else do we need?  Relationships between
them... well, in terms of human beings, everyone has one biological
mother and one biological father, so we add in

mother_id
father_id

Leaving the values of these as NULL would be the equivalent of 'unknown'.

and we now have, data-wise, a system that can trace biological heritage and
can handle siblings and half-siblings.
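
As an illustration (not part of Dan's original message), a minimal sketch of
the table described above, with hypothetical column types and the TYPE=MyISAM
syntax used elsewhere in these threads:

CREATE TABLE people (
  person_id    INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- unique ID, numbered automatically
  first_name   VARCHAR(50),
  middle_name1 VARCHAR(50),
  middle_name2 VARCHAR(50),
  last_name    VARCHAR(50),
  maiden_name  VARCHAR(50),
  mother_id    INT UNSIGNED,  -- NULL here means the biological mother is unknown
  father_id    INT UNSIGNED,  -- NULL here means the biological father is unknown
  PRIMARY KEY (person_id)
) TYPE=MyISAM;

Finding someone's parents is then just a self-join of people back onto itself
through mother_id and father_id.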

Other ideas for objects:

Marriages
- this one would be tricky/interesting, as marriages can change over time,
and people can have multiple marriages (although usually not two at a time,
unless bigamy is allowed in your user's state/country).  Strictly speaking,
marriages are not necessary to trace heritage, but they are good info...




 --From: Nitin [mailto:[EMAIL PROTECTED]
 --Sent: Monday, October 27, 2003 10:46 PM
 --To: [EMAIL PROTECTED]
 --Subject: Ancestry program
 --
 --Hi all,
 --
 --I'm developing a web-based ancestry program.  The user wants it to be
 --static; that means it isn't for everyone to use, only his family.  Better
 --to say, it'll contain only his family tree.
 --
 --Now, I can't think of the proper db design, which will help any user to
 --find his or her relationship with any other person in the tree.  Though I
 --can design a simple database, where everything will have to be done
 --through queries and scripts, I want to keep those queries as simple
 --as possible.
 --
 --Any help will be appreciated, as I'm new to such a problem.
 --
 --Thanx in advance
 --Nitin
 
 
 
-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: C API

2003-10-21 Thread Brad Teale
There is a C++ package called OTL (http://otl.sourceforge.net/home.htm).
It supports both MySQL (through MyODBC) and Oracle.  It works great with
Oracle applications, but we have not used it with MySQL.

Thanks,
Brad Teale
Universal Weather and Aviation, Inc.
mailto:[EMAIL PROTECTED]
713-944-1440 ext. 3623 

Arrange things so that a person needs to know nothing, and you'll end
up with a person who is capable of nothing. -- K. Brown

-Original Message-
From: Priyanka Gupta [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2003 7:14 PM
To: [EMAIL PROTECTED]
Subject: C API


Is there a way to have a common C API for MySQL and Oracle?  I am writing
some software that I would like to work with either MySQL or Oracle as the
backend server.

priyanka




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Adding indexes on large tables

2003-10-07 Thread Brad Teale
Brendan,
  We have used ext2, ext3, and reiser for testing purposes, and we have
found ext3 to be terribly slow on file read/write operations.  If you need
a journaling file system, I would go with reiser; otherwise, ext2 will be
blazingly fast.

The other thing I would do is move your DB to another drive, like Dan said.

Brad

-Original Message-
From: Brendan J Sherar [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 07, 2003 6:27 AM
To: [EMAIL PROTECTED]
Subject: Adding indexes on large tables


Greetings to all, and thanks for the excellent resource!

I have a question regarding indexing large tables (150M+ rows, 2.6G).

The tables in question have a format like this:

word_id mediumint unsigned
doc_id mediumint unsigned

Our indexes are as follows:

PRIMARY KEY (word_id, doc_id)
INDEX (doc_id)

The heart of the question is this:

When calling ALTER IGNORE TABLE doc_word ADD PRIMARY KEY(doc_id, word_id),
ADD INDEX(doc_id), MySQL proceeds to create a working copy of the table.
This process takes over an hour to perform. During this time, disk I/O for
the rest of the (live) database reaches a bottleneck and slows to an
unacceptable crawl. Once the copy has been created, MySQL is able to do
the actual index build very quickly and efficiently. This process must
occur three times daily.

A) MySQL creates these temporary tables in the same directory as the
original datafile. Is there a way to cause it to use an alternate
directory (i.e., on a separate mounted disk)?

B) Is there a way to nice this process in such a way that the amount of
I/O it consumes in performing the copy is restricted to a manageable level
so that other requests to the disks can be served in a timely fashion?

C) Would abandoning ext3 in favor of ext2 create a substantial difference?

D) We're reluctant to upgrade to 4.0 at this point, but were we to do so, are
there any significant gains in this situation?

E) The ALTER TABLE query is performed using Perl DBI. Is there a
lower-level call available which would improve performance?

F) Any other ideas or suggestions?

The system in question has the following setup:

Dual Xeon 2.8, 4G RAM, 2 x 146GB U160 SCSI (10,000 RPM) on RAID 1
(hardware). Redhat 8.0, 2.4.18 kernel, using ext3 fs. MySQL 3.23.56, with
myisam tables.

Relevant variables:

myisam_sort_buffer_size=512M
tmp_table_size=128M
This is a master, so bin_log is on

Thanks in advance for your help, and please keep up the excellent work!

Best,
Brendan




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: group by option on queries

2003-10-03 Thread Brad Teale
Thank you very much, Paul.  The ORDER BY NULL clause sped the query up
from 1.5 minutes to 10 seconds!  This is what we were looking for.

Thanks,
Brad

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 02, 2003 9:59 PM
To: Brad Teale; '[EMAIL PROTECTED]'
Subject: Re: group by option on queries


At 21:07 -0500 10/2/03, Brad Teale wrote:
Hi All,

I asked earlier about a query being slow, possibly due to MySQL 'Using
temporary; Using filesort' when processing the query.  I have done some
testing, and it appears that no matter what data set is used, MySQL always
performs a select with a 'group by' clause using the temporary and filesort
methods.  The only time I could force MySQL into not using these methods
happened when I did a group by on a column where every row contained the
same value.  Is this the standard behavior?  Is there any way to get around
this?  Is there a MySQL variable I can tweak?

Try adding ORDER BY NULL to suppress the implicit sorting that GROUP BY does
in MySQL.

Of course, that means your results won't be sorted.  If you really want
them sorted, you might try indexing modelhr, the column you're grouping
by.  You might try indexing it anyway, in fact.  That may give you quicker
grouping.
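
As an illustration (not part of the original reply), a minimal sketch of both
suggestions against the foo table shown in the example below:

  -- suppress the implicit sort that GROUP BY performs
  select stn, modelhr, m_temp from foo group by modelhr order by null;

  -- or index the grouping column, which may give quicker grouping
  alter table foo add index (modelhr);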



My example:
mysql> desc foo;
+----------+--------------+------+-----+------------+-------+
| Field    | Type         | Null | Key | Default    | Extra |
+----------+--------------+------+-----+------------+-------+
| stn      | char(4)      | YES  | MUL | NULL       |       |
| modelhr  | int(2)       | YES  |     | NULL       |       |
| f_temp   | decimal(6,2) | YES  |     | NULL       |       |
| m_temp   | decimal(6,2) | YES  |     | NULL       |       |
| yearmoda | date         |      |     | 0000-00-00 |       |
+----------+--------------+------+-----+------------+-------+
5 rows in set (0.00 sec)

mysql> select * from foo;
+------+---------+--------+--------+------------+
| stn  | modelhr | f_temp | m_temp | yearmoda   |
+------+---------+--------+--------+------------+
| KHOU |       6 |  90.00 |  89.60 | 2003-06-01 |
| KHOU |       6 |  76.00 |  71.60 | 2003-06-01 |
| KHOU |       6 |  75.00 |  73.40 | 2003-06-01 |
| KHOU |       6 |  88.00 |  87.80 | 2003-06-01 |
+------+---------+--------+--------+------------+
4 rows in set (0.01 sec)

mysql> explain select stn, modelhr, m_temp from foo group by modelhr;
+-------+------+---------------+------+---------+------+------+---------------------------------+
| table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
+-------+------+---------------+------+---------+------+------+---------------------------------+
| foo   | ALL  | NULL          | NULL |    NULL | NULL |  120 | Using temporary; Using filesort |
+-------+------+---------------+------+---------+------+------+---------------------------------+
1 row in set (0.01 sec)

mysql> explain select stn, modelhr, m_temp from foo where stn='KHOU' and yearmoda = '2003-06-02' group by modelhr;
+-------+------+---------------+------+---------+------+------+----------------------------------------------+
| table | type | possible_keys | key  | key_len | ref  | rows | Extra                                        |
+-------+------+---------------+------+---------+------+------+----------------------------------------------+
| foo   | ALL  | stn,stn_2     | NULL |    NULL | NULL |   90 | Using where; Using temporary; Using filesort |
+-------+------+---------------+------+---------+------+------+----------------------------------------------+
1 row in set (0.05 sec)


-- 
Paul DuBois, Senior Technical Writer
Madison, Wisconsin, USA
MySQL AB, www.mysql.com

Are you MySQL certified?  http://www.mysql.com/certification/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Query speed issue

2003-10-02 Thread Brad Teale
Hello, 

The problem:
I have the following query, which is taking upwards of 2 minutes to complete,
and we need it faster, preferably less than 30 seconds (don't laugh):
select modelhr, avg(f.temp-b.temp), avg(abs(f.temp-b.temp)),
stddev(f.temp-b.temp), stddev(abs(f.temp-b.temp)), count(f.temp-b.temp) from
foo as f, bar as b where f.fyearmoda=b.yearmoda and f.fhr=b.hr and
f.stn=b.stn and b.yearmoda >= '2003-01-01' and b.yearmoda <= '2003-01-31'
and b.stn='' group by modelhr;

When we run explain we get:
+-------+-------+-------------------+---------+---------+-----------------------+------+----------------------------------------------+
| table | type  | possible_keys     | key     | key_len | ref                   | rows | Extra                                        |
+-------+-------+-------------------+---------+---------+-----------------------+------+----------------------------------------------+
| b     | range | PRIMARY,interp_hr | PRIMARY |       7 | NULL                  |  679 | Using where; Using temporary; Using filesort |
| f     | ref   | stn,fcst          | stn     |      11 | const,m.yearmoda,m.Hr |   26 | Using where                                  |
+-------+-------+-------------------+---------+---------+-----------------------+------+----------------------------------------------+

Is there a reasonable way to keep this query from using temporary and
filesort?  I tried dumping the data into a temporary table, and the explain
ran the same.  Also, both MySQL setups perform the same.  Any ideas?
Please!
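
As an illustration (not part of the original post): the "RE: group by option
on queries" follow-up above reports that appending ORDER BY NULL, per Paul
DuBois's suggestion, cut this query from roughly 1.5 minutes to 10 seconds.
A sketch of the modified query, with the station value left blank as in the
original:

select modelhr, avg(f.temp-b.temp), avg(abs(f.temp-b.temp)),
stddev(f.temp-b.temp), stddev(abs(f.temp-b.temp)), count(f.temp-b.temp)
from foo as f, bar as b
where f.fyearmoda=b.yearmoda and f.fhr=b.hr and f.stn=b.stn
and b.yearmoda >= '2003-01-01' and b.yearmoda <= '2003-01-31'
and b.stn='' group by modelhr
order by null;  -- suppresses the implicit GROUP BY sort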

 System/Table Stuff Below
-
System: dual Xeon 2.4GHz machine with 2G RAM
Interconnect: QLogicFC 2200
Disk1: Sun T3 hardware raid5 with Reiserfs (64M cache controller)
Disk2: Sun T3 hardware raid0 with Reiserfs (64M cache controller)
OS: Red Hat Linux release 8.0 (with qlogicfc module)
MySQL1: 4.0.14 - prebuilt MySQL RPM, uses Disk1
MySQL2: 4.0.15a - hand built with options from the MySQL manual, uses Disk2

The table structures are as follows:
CREATE TABLE foo (
  yearmoda date NOT NULL default '0000-00-00',
  mruntime int(2) NOT NULL default '0',
  mhr int(3) NOT NULL default '0',
  fyearmoda date NOT NULL default '0000-00-00',
  fhr int(2) NOT NULL default '0',
  stn varchar(4) NOT NULL default '',
  temp decimal(6,2) default NULL,
... more but unused data here
  PRIMARY KEY  (yearmoda,mruntime,mhr,stn),
  KEY stn (stn,fyearmoda,fhr),
  KEY fcst (stn,yearmoda,mruntime)
) TYPE=MyISAM;

CREATE TABLE bar (
  stn char(4) NOT NULL default '',
  hr int(2) NOT NULL default '0',
  min int(2) NOT NULL default '0',
  day int(2) NOT NULL default '0',
  temp decimal(6,2) NOT NULL default '0.00',
... More unused data here
  yearmoda date NOT NULL default '0000-00-00',
  PRIMARY KEY  (stn,yearmoda,hr,min),
  KEY interp_hr (yearmoda,hr,stn)
) TYPE=MyISAM;

Table Stats:
foo - 38G data / 18G index (326K rows)
bar - 24G data / 14G index (35K rows)

  
Thanks,
Brad

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



group by option on queries

2003-10-02 Thread Brad Teale
Hi All,

I asked earlier about a query being slow, possibly due to MySQL 'Using
temporary; Using filesort' when processing the query.  I have done some
testing, and it appears that no matter what data set is used, MySQL always
performs a select with a 'group by' clause using the temporary and filesort
methods.  The only time I could force MySQL into not using these methods
happened when I did a group by on a column where every row contained the
same value.  Is this the standard behavior?  Is there any way to get around
this?  Is there a MySQL variable I can tweak?


My example:
mysql> desc foo;
+----------+--------------+------+-----+------------+-------+
| Field    | Type         | Null | Key | Default    | Extra |
+----------+--------------+------+-----+------------+-------+
| stn      | char(4)      | YES  | MUL | NULL       |       |
| modelhr  | int(2)       | YES  |     | NULL       |       |
| f_temp   | decimal(6,2) | YES  |     | NULL       |       |
| m_temp   | decimal(6,2) | YES  |     | NULL       |       |
| yearmoda | date         |      |     | 0000-00-00 |       |
+----------+--------------+------+-----+------------+-------+
5 rows in set (0.00 sec)

mysql> select * from foo;
+------+---------+--------+--------+------------+
| stn  | modelhr | f_temp | m_temp | yearmoda   |
+------+---------+--------+--------+------------+
| KHOU |       6 |  90.00 |  89.60 | 2003-06-01 |
| KHOU |       6 |  76.00 |  71.60 | 2003-06-01 |
| KHOU |       6 |  75.00 |  73.40 | 2003-06-01 |
| KHOU |       6 |  88.00 |  87.80 | 2003-06-01 |
+------+---------+--------+--------+------------+
4 rows in set (0.01 sec)

mysql> explain select stn, modelhr, m_temp from foo group by modelhr;
+-------+------+---------------+------+---------+------+------+---------------------------------+
| table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
+-------+------+---------------+------+---------+------+------+---------------------------------+
| foo   | ALL  | NULL          | NULL |    NULL | NULL |  120 | Using temporary; Using filesort |
+-------+------+---------------+------+---------+------+------+---------------------------------+
1 row in set (0.01 sec)

mysql> explain select stn, modelhr, m_temp from foo where stn='KHOU' and yearmoda = '2003-06-02' group by modelhr;
+-------+------+---------------+------+---------+------+------+----------------------------------------------+
| table | type | possible_keys | key  | key_len | ref  | rows | Extra                                        |
+-------+------+---------------+------+---------+------+------+----------------------------------------------+
| foo   | ALL  | stn,stn_2     | NULL |    NULL | NULL |   90 | Using where; Using temporary; Using filesort |
+-------+------+---------------+------+---------+------+------+----------------------------------------------+
1 row in set (0.05 sec)


Thanks,
Brad

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Real-time data warehousing

2002-05-17 Thread Brad Teale

We are warehousing real-time data.  The data is received at up to T1 speeds,
and is broken up and stored into the database in approximately 25 different
tables.  Currently MySQL is doing a terrific job: we are using MyISAM tables
and are storing 24 hours' worth of data, but we don't have any users yet, and
we need to store 72 hours' worth of data.

Our concern is that when we start letting our users (up to 200 simultaneous)
hit the database, we won't be able to keep up with ingesting and serving
data with the MyISAM locking scheme.

We have tested Oracle and PostgreSQL, which fell behind on the ingest.  The
current production system uses regular ISAM files, but we need to obtain a
certification which requires a relational database.  Also, the current
production system doesn't have the feature list the new system has.

Is there a better database solution or do you think MySQL can handle it?
If MySQL can handle it, would we be better off using InnoDB or MyISAM
tables?
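
As an illustration (not part of the original post): if MyISAM's table-level
locks turn out to be the bottleneck, one option is converting the hottest
ingest tables to InnoDB, which locks individual rows instead of whole tables.
A minimal sketch with a hypothetical table name:

  -- convert one (hypothetical) ingest table from MyISAM to InnoDB
  ALTER TABLE obs_current TYPE=InnoDB;

  -- confirm the table type afterward
  SHOW TABLE STATUS LIKE 'obs_current';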

Thanks,
Brad

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Real-time data warehousing

2002-05-17 Thread Brad Teale

I forgot to mention, we have Oracle in-house, and the machine the MySQL
database will reside on is a 2 proc Sun box with 1.5G of RAM.  The Oracle
databases reside on a 16 proc Sun box with 10G of RAM.

The decision to go or not go with MySQL is not based on money; it needs to
be based on performance.  We currently use Oracle in-house for everything,
but its speed hasn't been its selling point, and for this application we
need lots of speed.  That is why we are leaning toward MySQL, but we're not
sure if it could keep up with the addition of the user community.

I had one other question: how much of a performance hit would we take with
MySQL if we connected through MyODBC?

Thanks again,
Brad

-Original Message-
From: walt [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 17, 2002 11:47 AM
To: Brad Teale
Cc: '[EMAIL PROTECTED]'
Subject: Re: Real-time data warehousing

Brad,
 We're in the process of evaluating MySQL vs. our current Oracle 8 system.
Importing data is much faster in MySQL than Oracle according to the numbers
we're getting. However, from our benchmarking, Oracle seems to be faster on
the queries (no writes to the db during query time). The table we're running
our queries against has 46 columns and 14 indexes (some columns indexed twice
in multi-column indexes). All queries are based on indexed columns. We've
also run into some issues trying to delete indexes (14+ hours before we
killed the db and reloaded the data), but I may be doing something stupid.

One note on Oracle: $30,000+ for a single-processor licence. From our
testing, it looks like the bottleneck is disk I/O, not processing power.
With Oracle, you have better control over which disks your data resides on,
which lets you balance disk I/O better.  However, for $30K you can buy ten
15,000 RPM drives, stripe them, then buy another server for replication of
the data, and still have $25K left over.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Real-time data warehousing

2002-05-17 Thread Brad Teale

We have used the predecessor to the OTL for many of our apps and were
planning to use the OTL for the new system.  I thought the OTL used ODBC to
make its connection with databases other than Oracle.  I know the OTL
supports Oracle natively.

Sadly we cannot move to Linux.  We managed to get our web servers on Linux,
but the big iron will always be Sun here (Company policy).  There has been
talk of getting Oracle 9i? because Oracle has told us it is much faster, but
we are not holding our breath.

Thanks,
Brad Teale

-Original Message-
From: walt [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 17, 2002 12:27 PM
To: Brad Teale
Cc: '[EMAIL PROTECTED]'
Subject: Re: Real-time data warehousing

How are your apps written?  We use OTL libraries from
http://members.fortunecity.com/skuchin/home.htm
which are compiled into our C/C++ code. Moving our apps from Oracle to MySQL
only requires changing 3 or 4 lines per call to the db in the code. It's not
ODBC compliant, but it still allows our apps to be fairly portable and fast.
We debated rewriting our apps to be ODBC compliant, but figured that was one
more layer for bugs, and we'd have to switch db platforms 4 times for it to
be cost effective.

Have you tried Oracle on Linux? We did some testing before Oracle told us
the cost of migrating our licence from Oracle8/Solaris to Oracle8i/Linux. We
benchmarked our current db server, a single-processor Sun Ultra with 768MB of
RAM, against a 600MHz Intel/Linux box with 500MB of RAM. The Linux/8i/Intel
box smoked our current db server.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Compiling on Solaris

2002-01-17 Thread Brad Teale

A couple of questions about compiling on Solaris.

1) Are the Sun Workshop 6 compilers supported for MySQL and MySQL++?
   1a) Can you use the -native flag without problems?

2) Is the binary distribution compiled with Sun or GNU compilers?

Background Info:
  We are currently trying to ingest 1.5M/sec of weather data into a
database, and we have had luck using MySQL 3.23.4x on an 800MHz Linux machine
with 256M of RAM.  However, this machine is basically useless for anything
else, and it is my desktop.  We have several Sun servers with 4+ procs and
4+GB of RAM, and I thought one would make a good ingest machine.  So I would
like to compile MySQL and MySQL++ with the Sun compilers to take full
advantage of everything the platform has to offer.

Any help would be great.

Thanks,
Brad Teale
Universal Weather and Aviation, Inc.
mailto:[EMAIL PROTECTED]

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Compiling on Solaris

2002-01-17 Thread Brad Teale

I found the answers to my previous question about MySQL in the manual. Doh!

However, when I tried to compile MySQL, I ran into the following error:

/bin/sh ../libtool --mode=compile cc
-DDEFAULT_CHARSET_HOME=\"/export/home/bteale/mysql-3.23.47\"
-DDATADIR=\"/export/home/bteale/mysql-3.23.47/var\"
-DSHAREDIR=\"/export/home/bteale/mysql-3.23.47/share/mysql\"
-DUNDEF_THREADS_HACK -DDONT_USE_RAID  -I./../include -I../include
-I./.. -I.. -I.. -O -DDBUG_OFF -Xa -fast -native -xstrconst -mt
-DHAVE_CURSES_H -I/export/home/bteale/pkgs/mysql-3.23.47/include
-DHAVE_RWLOCK_T -c hash.c
rm -f .libs/hash.lo
cc -DDEFAULT_CHARSET_HOME=\"/export/home/bteale/mysql-3.23.47\"
-DDATADIR=\"/export/home/bteale/mysql-3.23.47/var\"
-DSHAREDIR=\"/export/home/bteale/mysql-3.23.47/share/mysql\"
-DUNDEF_THREADS_HACK -DDONT_USE_RAID -I./../include -I../include -I./.. -I..
-I.. -O -DDBUG_OFF -Xa -fast -native -xstrconst -mt -DHAVE_CURSES_H
-I/export/home/bteale/pkgs/mysql-3.23.47/include -DHAVE_RWLOCK_T -c hash.c
-KPIC -DPIC
cc: Warning: -xarch=native has been explicitly specified, or implicitly
specified by a macro option, -xarch=native on this architecture implies
-xarch=v8plusa which generates code that does not run on pre UltraSPARC
processors
hash.c, line 189: reference to static variable hash_key in inline extern
function
hash.c, line 229: cannot recover from previous errors
cc: acomp failed for hash.c
make[2]: *** [hash.lo] Error 1
make[2]: Leaving directory `/export/home/bteale/pkgs/mysql-3.23.47/libmysql'
make[1]: *** [all-recursive] Error 1

I looked into hash.c, and the relevant code is as follows:

183: #ifndef _FORTREC_
184: inline
185: #endif
186: uint rec_hashnr(HASH *hash,const byte *record)
187: {
188:   uint length;
189:   byte *key=hash_key(hash,record,&length,0);
190:   return (*hash->calc_hashnr)(key,length);
191: }

To fix the problem I poked around, and ended up commenting out lines
183-185.  After this it compiles fine.  Will this lead to any problems?

Computer Config:
SunOS 5.8 sun4u sparc SUNW, Ultra-80
Sun Workshop 6 update 2 5.3 2001/05/15

Thanks,
Brad Teale
Universal Weather and Aviation, Inc.
mailto:[EMAIL PROTECTED]

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




AUTO_INCREMENT question...

2001-06-20 Thread Brad Teale

I am currently using MySQL to warehouse real-time data, and I have a couple
of questions regarding AUTO_INCREMENT columns.

OS: Linux/Solaris
MySQL version: 3.23.33
Table Types: MYISAM


1) The data is only stored for 24hrs.  If I do continuous deletes from the
tables, will the AUTO_INCREMENT columns reuse the deleted numbers?

2) Do the AUTO_INCREMENT columns wrap back to 0 once they run out of
numbers at the top end?

3) If neither of these cases is true, is there a way to simulate number 2?

Thanks,
Brad Teale
Universal Weather and Aviation, Inc.
mailto:[EMAIL PROTECTED]
713-944-1440 ext. 3623 

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php