split a datafile

2006-02-06 Thread wangxu
I want to split a datafile into two datafiles. How can I do this?


Two transactions cannot have the AUTO-INC lock on the same table simultaneously ... what happens if it does?

2006-02-06 Thread Ady Wicaksono

From the MySQL 5.0.18 manual:
==

When accessing the auto-increment counter, InnoDB uses a special table 
level AUTO-INC lock that it keeps to the end of the current SQL 
statement, not to the end of the transaction. The special lock release 
strategy was introduced to improve concurrency for inserts into a table 
containing an AUTO_INCREMENT column. Two transactions cannot have the 
AUTO-INC lock on the same table simultaneously.
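
A minimal two-session sketch of the serialization described above; the table
is illustrative, and it must have an AUTO_INCREMENT column for the AUTO-INC
lock to come into play:

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, val CHAR(1)) ENGINE=InnoDB;

-- session 1:
INSERT INTO t (val) VALUES ('a');  -- takes the table-level AUTO-INC lock;
                                   -- released when this statement ends
-- session 2, issued while session 1's INSERT is still running:
INSERT INTO t (val) VALUES ('b');  -- blocks on the AUTO-INC lock until
                                   -- session 1's statement (not its whole
                                   -- transaction) completes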



So what happens if it does? ... Any ideas?

---TRANSACTION 0 461360628, ACTIVE 19 sec, process no 734, OS thread id 
3136353728 setting auto-inc lock

mysql tables in use 1, locked 1
LOCK WAIT 1 lock struct(s), heap size 320
MySQL thread id 28241, query id 13012404 10.1.30.70 root update
INSERT INTO sms_9388_telkomsel.t_outgoing_sms (out_sms_time, in_sms_time,
out_sms_pin_quiz, out_sms_trx_id, in_sms_message_id, out_sms_dest,
out_sms_typePremium, out_sms_msg, out_sms_typeService_telkomsel,
out_sms_quiz_keycode) values (NOW(), NOW(), '9266715', '9266715',
'9266715', '6281356059825', 'TELKOMSEL_LOVE_2000', 'LOVE: Setelah
berkencan, jangan lupa untuk menghubunginya keesokan hari dan katakan
betapa mengesankan pertemuan kemarin dan berharap bertemu lagi.', '2000',
'LOVE')

--- TRX HAS BEEN WAITING 19 SEC FOR THIS LOCK TO BE GRANTED:
TABLE LOCK table `sms_9388_telkomsel/t_outgoing_sms` trx id 0 461360628 
lock mode AUTO-INC waiting

--
---TRANSACTION 0 461360624, ACTIVE 22 sec, process no 734, OS thread id 
3133603008 setting auto-inc lock

mysql tables in use 1, locked 1
LOCK WAIT 1 lock struct(s), heap size 320
MySQL thread id 28227, query id 13012396 10.1.30.70 root update
INSERT INTO sms_9388_telkomsel.t_outgoing_sms (out_sms_time, in_sms_time,
out_sms_pin_quiz, out_sms_trx_id, in_sms_message_id, out_sms_dest,
out_sms_typePremium, out_sms_msg, out_sms_typeService_telkomsel,
out_sms_quiz_keycode) values (NOW(), NOW(), '9266743', '9266743',
'9266743', '6281375092919', 'TELKOMSEL_LOVE_2000', 'LOVE: Setelah
berkencan, jangan lupa untuk menghubunginya keesokan hari dan katakan
betapa mengesankan pertemuan kemarin dan berharap bertemu lagi.', '2000',
'LOVE')

--- TRX HAS BEEN WAITING 22 SEC FOR THIS LOCK TO BE GRANTED:
TABLE LOCK table `sms_9388_telkomsel/t_outgoing_sms` trx id 0 461360624 
lock mode AUTO-INC waiting






Re: Help with query optimization & query SUM

2006-02-06 Thread سيد هادی راستگوی حقی
Dear Reynier,

You can use a JOIN on both of your tables.
The JOIN has to run on the same field in each, i.e. IDA.

SELECT * FROM carro_de_compras LEFT JOIN os_articulo ON carro_de_compras.IDA
= os_articulo.IDA

This query returns all your users with their articles (if any), and you can
iterate over the result.

One note, though:
use an INDEX on both tables, or you may run into problems as your row
counts grow.
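
A minimal sketch of that suggestion. The quoted schema below shows that
os_articulo already has KEY AI_IDA on IDA, so only carro_de_compras needs
one (the index name here is illustrative):

ALTER TABLE carro_de_compras ADD INDEX idx_ida (IDA);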

About the UPDATE query:

UPDATE table SET value=value+1 WHERE id='1'

is enough, use that.


On 2/7/06, Reynier Perez Mira <[EMAIL PROTECTED]> wrote:
>
> Hi:
> I'm developing a simple shopping cart. I have these two tables:
> carro_de_compras
> --
> IDU int(11) NOT NULL
> IDA int(11) NOT NULL
> CantidadDeArticulos int(11) NOT NULL
>
> os_articulo
> --
> IDA int(11) NOT NULL auto_increment,
> IDC int(11) NOT NULL default '0',
> ANombre varchar(200) NOT NULL default '',
> ADescripcion text,
> ACantidad int(11) NOT NULL default '0',
> AImagen varchar(50) default NULL,
> IDU int(11) NOT NULL default '0',
> APrecio float(6,2) default NULL,
> KEY AI_IDA (`IDA`)
>
> Before asking, let me explain some things. As you can see, the field IDU
> appears in both tables. In the first (table carro_de_compras) it is the ID
> of the user logged in to the e-commerce system; in the second it is the ID
> of the user who uploaded the articles for sale. Something like eBay, where
> you can sell and buy at any time. Now I have arrived at the point where I
> need to optimize queries:
>
> PHP Code:
> -
> $sql = mysql_query("SELECT * FROM carro_de_compras");
> $sresultado = mysql_fetch_assoc($sql);
>
> $query = mysql_query("SELECT * FROM os_articulo WHERE
> (IDA='".$sresultado['IDA']."')");
> while ($record = mysql_fetch_assoc($query)) {
> $productos[] = $record;
> }
>
> The question for this problem is: is there any way to optimize this and
> reduce it to a single query? I read about JOIN in the MySQL docs, but I
> can't understand how it works. Maybe because I'm Cuban and don't
> understand English as well as I'd like.
>
> The other question is how to add a value to a field. For example:
> $sql = mysql_query("UPDATE table SET value=value+1 WHERE id='1'");
>
> To do that today, I do this:
> $sql = mysql_query("SELECT value FROM table WHERE id='1'");
> $result = mysql_fetch_assoc($sql);
> $update = mysql_query("UPDATE table SET value='" . ($result['value'] + 1) . "'
> WHERE id='1'");
>
> So, is it possible to optimize this query?
>
>
> Regards
> ReynierPM
> 4th year, Computer Science Engineering
> Registered Linux user: #310201
> *
> The superhero programmer learns by sharing his knowledge.
> He is the reference point for his colleagues. Everyone comes to him with
> questions and he secretly encourages it, because that is how he acquires
> his legendary wisdom: by listening and helping others...
>


--
Sincerely,
Hadi Rastgou
Get Firefox! <http://www.spreadfirefox.com/?q=affiliates&id=0&t=1>


Help with query optimization & query SUM

2006-02-06 Thread Reynier Perez Mira
Hi:
I'm developing a simple shopping cart. I have these two tables:
carro_de_compras
--
IDU int(11) NOT NULL
IDA int(11) NOT NULL
CantidadDeArticulos int(11) NOT NULL

os_articulo
--
IDA int(11) NOT NULL auto_increment,
IDC int(11) NOT NULL default '0',
ANombre varchar(200) NOT NULL default '',
ADescripcion text,
ACantidad int(11) NOT NULL default '0',
AImagen varchar(50) default NULL,
IDU int(11) NOT NULL default '0',
APrecio float(6,2) default NULL,
KEY AI_IDA (`IDA`)

Before asking, let me explain some things. As you can see, the field IDU
appears in both tables. In the first (table carro_de_compras) it is the ID of
the user logged in to the e-commerce system; in the second it is the ID of the
user who uploaded the articles for sale. Something like eBay, where you can
sell and buy at any time. Now I have arrived at the point where I need to
optimize queries:

PHP Code:
-
$sql = mysql_query("SELECT * FROM carro_de_compras"); 
$sresultado = mysql_fetch_assoc($sql);

$query = mysql_query("SELECT * FROM os_articulo WHERE 
(IDA='".$sresultado['IDA']."')"); 
while ($record = mysql_fetch_assoc($query)) {  
 $productos[] = $record; 
}

The question for this problem is: is there any way to optimize this and reduce
it to a single query? I read about JOIN in the MySQL docs, but I can't
understand how it works. Maybe because I'm Cuban and don't understand English
as well as I'd like.

The other question is how to add a value to a field. For example:
$sql = mysql_query("UPDATE table SET value=value+1 WHERE id='1'");

To do that today, I do this:
$sql = mysql_query("SELECT value FROM table WHERE id='1'");
$result = mysql_fetch_assoc($sql);
$update = mysql_query("UPDATE table SET value='" . ($result['value'] + 1) . "' WHERE id='1'");

So, is it possible to optimize this query?


Regards
ReynierPM
4th year, Computer Science Engineering
Registered Linux user: #310201
*
The superhero programmer learns by sharing his knowledge.
He is the reference point for his colleagues. Everyone comes to him with
questions and he secretly encourages it, because that is how he acquires his
legendary wisdom: by listening and helping others...




Re: AUTOINCREMENT / UNIQUE Behavior [Newbie Question]

2006-02-06 Thread Dan Nelson
In the last episode (Feb 06), David T. Ashley said:
> I remember in MySQL that you can define an integer table field as
> AUTOINCREMENT and UNIQUE (I might have the specific keywords wrong,
> but everyone will know what I mean).
> 
> In the life of a database where there are frequent additions and
> deletions, 2^32 isn't that large of a number.
> 
> When the integer field reaches 2^32-1 or whatever the upper limit is,
> what happens then?  Will it try to reuse available values from
> records that have been deleted?  Or is it always an error?

It will roll over and return a "duplicate key" error on the first
insert of a low-numbered value that still exists.  If you think you're
going to generate more than 2 billion records, use a BIGINT which will
never roll over (well, if you inserted 2 billion records per second, it
would roll over in ~270 years).
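
A minimal sketch of that error, shrunk to TINYINT UNSIGNED (max 255) so it
reproduces quickly; the table name is illustrative:

CREATE TABLE t (id TINYINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY);
INSERT INTO t VALUES (255);   -- push the counter to the column's maximum
INSERT INTO t VALUES (NULL);  -- fails with a "duplicate key" error (1062)
                              -- once no unused value can be generated

The same thing happens at an INT column's maximum, just much later.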

-- 
Dan Nelson
[EMAIL PROTECTED]




AUTOINCREMENT / UNIQUE Behavior [Newbie Question]

2006-02-06 Thread David T. Ashley
I remember in MySQL that you can define an integer table field as
AUTOINCREMENT and UNIQUE (I might have the specific keywords wrong, but
everyone will know what I mean).

In the life of a database where there are frequent additions and
deletions, 2^32 isn't that large of a number.

When the integer field reaches 2^32-1 or whatever the upper limit is, what
happens then?  Will it try to reuse available values from records that
have been deleted?  Or is it always an error?

Thanks, Dave.
---
David T. Ashley ([EMAIL PROTECTED])
Thousand Feet Consulting, LLC





replacement for Oracle initcap function

2006-02-06 Thread Sid Lane
I am finishing up performing an Oraclectomy on a bunch of legacy Java code
(don't ask why the DBA got stuck with this - sore subject) and have one
outstanding problem to solve:

Oracle has a function, initcap(), which capitalizes the first character of
each word and lowercases the rest. For example, initcap('ABC DEF GHI') =
'Abc Def Ghi'.

I have not found a (my)SQL way to do this. Did I overlook something, or do I
need to do this client-side after I fetch the results?
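
MySQL has no built-in initcap() equivalent. For the simpler job of
capitalizing only the first character of the whole string, a minimal sketch
using standard string functions (col and tbl are placeholders):

SELECT CONCAT(UPPER(LEFT(col, 1)), LOWER(SUBSTRING(col, 2))) FROM tbl;

Capitalizing every word the way initcap() does generally means a stored
function or client-side post-processing.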


Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-02-06 Thread Jan Kirchhoff
I just managed to get two identical test servers running, both slaves of my
production system, replicating a few databases including two of the
heavy-use tables.
One server uses HEAP tables; on the other I changed the table format to
InnoDB.


I've had some problems with the replication, but now it seems like
everything is running - although I still don't know what the problem was/is.
I hope I'll be able to do some testing during the next few days... I'll give
more feedback later this week. Thanks for the help!


Jan



sheeri kritzer schrieb:

I can confirm that using a large buffer pool, putting all the hot data
in there, and setting the logfiles large, etc. works in the real world
-- that's what we do, and all our important data resides in memory. 
The wonder of transactions, foreign keys, etc., with the speed of

memory tables.

-Sheeri

On 2/5/06, Heikki Tuuri <[EMAIL PROTECTED]> wrote:
  

Jan,

if you make the InnoDB buffer pool big enough to hold all your data, or at
least all the 'hot data', and set ib_logfiles large as recommended at
http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html, then
InnoDB performance should be quite close to MEMORY/HEAP performance for
small SQL queries. If all the data is in the buffer pool, then InnoDB is
essentially a 'main-memory' database. It even uses automatically built hash
indexes.

This assumes that you do not bump into extensive deadlock issues. Deadlocks
can occur even with single row UPDATEs if you update indexed columns.
Setting innodb_locks_unsafe_for_binlog will reduce deadlocks, but read the
caveats about it.

Best regards,

Heikki

Oracle Corp./Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL

InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
http://www.innodb.com/order.php
- Original Message -
From: "Jan Kirchhoff" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Tuesday, January 31, 2006 1:09 PM
Subject: Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?




Hi,

I am currently experiencing trouble getting my new MySQL 5 servers
running as slaves on my old 4.1.13 master.
Looks like I'll have to dump the whole 30GB database and import it on
the new servers :( At this moment I don't see any opportunity to do
this before the weekend, since the longest time I can block any of our
production systems is only 2-3 hours between midnight and 2am :(

I am still curious whether InnoDB could handle the load of my updates
on the heavy-traffic tables, since it's disk-bound and does
transactions.

What I would probably need is an in-memory table without any kind of
locking - at least no table locks! But there is no such engine in
MySQL. If a cluster can handle that (although it has the transaction
overhead) it would probably be perfect, since it even adds high
availability in a very easy way...

Jan

Jan Kirchhoff schrieb:
  

sheeri kritzer schrieb:


No problem:

Firstly, how are you measuring your updates on a single table?  I took
a few binary logs, grepped out for things that changed the table,
counting the lines (using wc) and then dividing by the # of seconds
the binary logs covered.  The average for one table was 108 updates
per second.
  I'm very intrigued as to how you came up with 2-300 updates per second
for one table. . . did you do it that way?  If not, how did you do it?
 (We are a VERY heavily trafficked site, having 18,000 people online
and active, and that accounts for the 108 updates per second.  So if
you have more traffic than that. .  .wow!)

  

Thanks for your hardware/database information. I will take a close look
at it tomorrow, since I want to go home for today - it's already 9 pm
over here... I need beer ;)

We are not running a webservice here (actually we do, too, but that's
on other systems). This is part of our database with data of major
stock exchanges worldwide that we deliver realtime data for. Currently
that is around 900,000 quotes, and during trading hours they change all
the time... We have many more updates than selects on the main database.
Our application that receives the datastream writes blocks (INSERT ...
ON DUPLICATE KEY UPDATE ...) with all records that changed since the
last write. It gives me debug output like "[timestamp] Wrote 19427
rows in 6 queries" every 30 seconds - and those are numbers that I can
rely on.
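
A minimal sketch of that multi-row write pattern, with illustrative table
and column names (VALUES() refers to the value the row would have inserted):

INSERT INTO quotes (symbol, price, updated)
VALUES ('AAA', 10.10, NOW()), ('BBB', 20.20, NOW())
ON DUPLICATE KEY UPDATE price = VALUES(price), updated = VALUES(updated);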

Jan







RE: Moving from PowWeb to Rackspace

2006-02-06 Thread George Law
You might get a timeout with phpMyAdmin.

The many web hosts I have used have pretty much all kept PHP's default
90-second execution time for PHP pages.

I have a zip code database with 50,000 records and had to do that import
through an ssh session on the web server, using mysqldump on the old server
and "cat *.sql | mysql ..." on the new.

Assuming PowWeb doesn't do shell accounts (very few web hosts do these
days):

Search Google for "telnet.cgi".
This is a CGI script that lets you run commands on the webserver. It gives
you a textbox to enter your command in; click "submit" and it runs the
command.

With this, you should be able to run mysqldump to export the database, pipe
it to gzip, and create a file that you can download and upload to your
Rackspace server.

You should be able to do the MD5 sum like James suggests using the same
telnet tool:

md5sum export.sql.gz

Then after you upload, run the same command to make sure you didn't lose
any bits in the transfer.

--
George

> -Original Message-
> From: JamesDR [mailto:[EMAIL PROTECTED] 
> Sent: Monday, February 06, 2006 11:54 AM
> To: mysql@lists.mysql.com
> Subject: Re: Moving from PowWeb to Rackspace
> 
> Brian Dunning wrote:
> > I have a bunch of databases - some are really big, 2GB - on 
> a number 
> > of different accounts at PowWeb. I am buying a Rackspace server and 
> > want to move everything over -- hopefully all in one night. 
> Can anyone 
> > suggest the best way to do this? Would it be to use the 
> Export command 
> > in phpMyAdmin?
> > 
> > 
> > 
> > 
> 
> I'm not familiar with phpMyAdmin, but I would dump everything 
> to sql files, using the extended insert option then 
> compressing the resulting sql files. Then create a hash (MD5) 
> and ftp the files over, checking the hash on the remote 
> system, uncompressing, and importing. I do something like 
> this with my backups (all automated, except for the checking 
> of the hash on the remote system, I just store the MD5 in an 
> ascii file.)
> 
> --
> Thanks,
> James
> 
> 
> 




Re: Moving from PowWeb to Rackspace

2006-02-06 Thread JamesDR

Brian Dunning wrote:
I have a bunch of databases - some are really big, 2GB - on a number of 
different accounts at PowWeb. I am buying a Rackspace server and want to 
move everything over -- hopefully all in one night. Can anyone suggest 
the best way to do this? Would it be to use the Export command in 
phpMyAdmin?







I'm not familiar with phpMyAdmin, but I would dump everything to sql 
files, using the extended insert option then compressing the resulting 
sql files. Then create a hash (MD5) and ftp the files over, checking the 
hash on the remote system, uncompressing, and importing. I do something 
like this with my backups (all automated, except for the checking of the 
hash on the remote system, I just store the MD5 in an ascii file.)


--
Thanks,
James




Moving from PowWeb to Rackspace

2006-02-06 Thread Brian Dunning
I have a bunch of databases - some are really big, 2GB - on a number  
of different accounts at PowWeb. I am buying a Rackspace server and  
want to move everything over -- hopefully all in one night. Can  
anyone suggest the best way to do this? Would it be to use the Export  
command in phpMyAdmin?





Re: 4.0.20 vs 4.1.16 issue

2006-02-06 Thread sheeri kritzer
Can you show us the results of

SHOW CREATE TABLE tbl_nada;

preferably for the system as it was before, if you have backups from
then, and for how it is now.  I would guess that your table type
changed, or perhaps indexing did.

What do you mean by "crashes"?  What's the error message from the application?

How did you upgrade?  Did you put it on a new machine?  Recompile the
source code?  Use a packaged binary?

-Sheeri

On 2/5/06, C.F. Scheidecker Antunes <[EMAIL PROTECTED]> wrote:
> Hello all,
>
> I was running a legacy app on a MySQL 4.0.20 server.
> The app queried the server like this "SELECT count(total) as total from
> tbl_nada"
>
> For 4.0.20 the result of this query was an integer.
>
> Running the same query on 4.1.16 returns a much larger Integer and the
> app crashes.
>
> Problem is that I CANNOT change the app.
>
> Is there any configuration that you can do to this server so that it
> treats the query as it was before?
>
> It is a Fedora Core 4 Linux box with MySQL 4.1.16.
>
> I have two options: going back to a 4.0.20 server or fixing 4.1.16 somehow.
>
> Any better suggestions on how to address this issue?
>
> Thanks,
>
> C.F.
>
>
>
>




Re: Last access time of a table

2006-02-06 Thread sheeri kritzer
What table types?  The filesystem doesn't have this info for InnoDB or
MEMORY tables (tablespaces, sure...).
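
One partially related option, as a sketch: SHOW TABLE STATUS exposes an
Update_time column (last modification, not last read) for MyISAM tables;
engines that don't track it report NULL, and MySQL doesn't record a last
*access* time anywhere ('mytable' below is a placeholder):

SHOW TABLE STATUS LIKE 'mytable';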

-Sheeri

On 2/3/06, Andrew Braithwaite <[EMAIL PROTECTED]> wrote:
> Hi everyone,
>
> Does anyone know if there is a way to get the last access time from a
> mysql table through mysql commands/queries?
>
> I don't want to go to the filesystem to get this info.
>
> I understand that this could be tricky especially as we have query
> caching turned on and serve quite a few sql requests from query cache.
>
> Can anyone help?
>
> Cheers,
>
> Andrew
>
> SQL, Query
>




Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-02-06 Thread sheeri kritzer
I can confirm that using a large buffer pool, putting all the hot data
in there, and setting the logfiles large, etc. works in the real world
-- that's what we do, and all our important data resides in memory. 
The wonder of transactions, foreign keys, etc., with the speed of
memory tables.
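
A minimal sketch for checking the two settings Heikki mentions on a running
server:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_log_file_size';

Both are startup options (my.cnf or the command line), so changing them
requires a server restart.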

-Sheeri

On 2/5/06, Heikki Tuuri <[EMAIL PROTECTED]> wrote:
> Jan,
>
> if you make the InnoDB buffer pool big enough to hold all your data, or at
> least all the 'hot data', and set ib_logfiles large as recommended at
> http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html, then
> InnoDB performance should be quite close to MEMORY/HEAP performance for
> small SQL queries. If all the data is in the buffer pool, then InnoDB is
> essentially a 'main-memory' database. It even uses automatically built hash
> indexes.
>
> This assumes that you do not bump into extensive deadlock issues. Deadlocks
> can occur even with single row UPDATEs if you update indexed columns.
> Setting innodb_locks_unsafe_for_binlog will reduce deadlocks, but read the
> caveats about it.
>
> Best regards,
>
> Heikki
>
> Oracle Corp./Innobase Oy
> InnoDB - transactions, row level locking, and foreign keys for MySQL
>
> InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
> tables
> http://www.innodb.com/order.php
> - Original Message -
> From: "Jan Kirchhoff" <[EMAIL PROTECTED]>
> Newsgroups: mailing.database.myodbc
> Sent: Tuesday, January 31, 2006 1:09 PM
> Subject: Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?
>
>
> > Hi,
> >
> > I am currently experiencing trouble getting my new MySQL 5 servers
> > running as slaves on my old 4.1.13 master.
> > Looks like I'll have to dump the whole 30GB database and import it on
> > the new servers :( At this moment I don't see any opportunity to do
> > this before the weekend, since the longest time I can block any of our
> > production systems is only 2-3 hours between midnight and 2am :(
> >
> > I am still curious whether InnoDB could handle the load of my updates
> > on the heavy-traffic tables, since it's disk-bound and does
> > transactions.
> >
> > What I would probably need is an in-memory table without any kind of
> > locking - at least no table locks! But there is no such engine in
> > MySQL. If a cluster can handle that (although it has the transaction
> > overhead) it would probably be perfect, since it even adds high
> > availability in a very easy way...
> >
> > Jan
> >
> > Jan Kirchhoff schrieb:
> >> sheeri kritzer schrieb:
> >>> No problem:
> >>>
> >>> Firstly, how are you measuring your updates on a single table?  I took
> >>> a few binary logs, grepped out for things that changed the table,
> >>> counting the lines (using wc) and then dividing by the # of seconds
> >>> the binary logs covered.  The average for one table was 108 updates
> >>> per second.
> >>>   I'm very intrigued as to how you came up with 2-300 updates per second
> >>> for one table. . . did you do it that way?  If not, how did you do it?
> >>>  (We are a VERY heavily trafficked site, having 18,000 people online
> >>> and active, and that accounts for the 108 updates per second.  So if
> >>> you have more traffic than that. .  .wow!)
> >>>
> >> Thanks for your hardware/database information. I will take a close look
> >> at it tomorrow, since I want to go home for today - it's already 9 pm
> >> over here... I need beer ;)
> >>
> >> We are not running a webservice here (actually we do, too, but that's
> >> on other systems). This is part of our database with data of major
> >> stock exchanges worldwide that we deliver realtime data for. Currently
> >> that is around 900,000 quotes, and during trading hours they change all
> >> the time... We have many more updates than selects on the main database.
> >> Our application that receives the datastream writes blocks (INSERT ...
> >> ON DUPLICATE KEY UPDATE ...) with all records that changed since the
> >> last write. It gives me debug output like "[timestamp] Wrote 19427
> >> rows in 6 queries" every 30 seconds - and those are numbers that I can
> >> rely on.
> >>
> >> Jan
> >>
> >>
> >
> >
>
>




RE: Report Generator

2006-02-06 Thread George Law
Chuck,

Check this out - it's a real **simple** JSP that just does a query and
dumps the results out to the web browser:
http://www.thebook-demo.com/java-server/jsp/Mysql/MysqlExample.jsp

It's been a while since I have worked with JSP - I wrote this example
several years ago (the web site belongs to a previous employer). At the
time, I think I had to drop the unpacked jar file for the MySQL driver into
the Tomcat source directory. That is about all I remember about Tomcat.

--
George Law


> -Original Message-
> From: Chuck Craig [mailto:[EMAIL PROTECTED] 
> Sent: Saturday, February 04, 2006 10:23 AM
> To: MySQL-General
> Subject: Report Generator
> 
> Hi, I'm new to the list and not sure whether my question 
> belongs here or not. I'm looking for an open source program, 
> that runs on JSP, to generate reports on data in MySQL 
> databases. I've found a few myself but they run on PHP. Any 
> thoughts or advice would be very appreciated.
> 
> -Chuck Craig
> 




Re: Report Generator

2006-02-06 Thread Gleb Paharenko
Hello.

I'm not sure, but have a look here:
  http://dev.mysql.com/tech-resources/articles/dba-dashboard.html




Chuck Craig wrote:
> Hi, I'm new to the list and not sure whether my question belongs here or
> not. I'm looking for an open source program, that runs on JSP, to
> generate reports on data in MySQL databases. I've found a few myself but
> they run on PHP. Any thoughts or advice would be very appreciated.
> 
> -Chuck Craig


-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   <___/   www.mysql.com




Re: 4.0.20 vs 4.1.16 issue

2006-02-06 Thread Martijn Tonies
Hi,

> I was running a legacy app on a MySQL 4.0.20 server.
> The app queried the server like this "SELECT count(total) as total from
> tbl_nada"
>
> For 4.0.20 the result of this query was an integer.
>
> Running the same query on 4.1.16 returns a much larger Integer and the
> app crashes.
>
> Problem is that I CANNOT change the app.
>
> Is there any configuration that you can do to this server so that it
> treats the query as it was before?

And then what? If the value itself is larger than an Integer, what are
you expecting to get? Silently truncate the value to fit inside an integer?

How would that make your app any better?


Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS SQL
Server
Upscene Productions
http://www.upscene.com
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com

