Hi,
I just wanted to remark that the Event feature is already working in server
version 5.1.37 (installed on Debian).
The tech resources article mentions that this feature has been available since
version 5.1.6 (see
http://dev.mysql.com/tech-resources/articles/event-feature.html). So I
wanted t…
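For anyone who has not tried the feature yet, here is a minimal sketch of
enabling the scheduler and defining an event; the table and schedule are made
up purely for illustration:

  -- The scheduler is OFF by default, so switch it on first:
  SET GLOBAL event_scheduler = ON;

  -- A hypothetical housekeeping event that runs once a day:
  CREATE EVENT purge_old_sessions
    ON SCHEDULE EVERY 1 DAY
    DO DELETE FROM sessions WHERE last_seen < NOW() - INTERVAL 30 DAY;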
Hi,
Have you checked out our tool "Database Workbench" yet? It
includes a Schema Compare tool that generates a script.
See www.upscene.com
With regards,
Martijn Tonies
Upscene Productions
http://www.upscene.com
Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL,
>> What is the best way to synchronize the database schemas? db_test has had a
>> few indexes and constraints added to several tables and I need to generate a
>> MySQL script to apply these changes to db_prod.
Ruby on Rails comes with tools to dump and load db schemas;
it's trivial to create a sk
On 1/21/10 12:03 PM, "Price, Randall" wrote:
> I have two databases, one in a production environment (let's call it
> db_prod) and the other in a testing environment (let's call it db_test).
>
> What is the best way to synchronize the database schemas? db_test has had a
> few indexes and constraints added to several tables and I need to generate a
> MySQL script to apply these changes to db_prod.
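For what it's worth, the script being asked for is usually just a sequence of
ALTER TABLE statements collected from db_test; a minimal sketch, with the
table, index and constraint names made up for illustration:

  ALTER TABLE orders
    ADD INDEX idx_orders_customer_id (customer_id),
    ADD CONSTRAINT fk_orders_customer
      FOREIGN KEY (customer_id) REFERENCES customers (id);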
If you only need very fast INSERTs, you might try the ARCHIVE storage engine
( http://dev.mysql.com/tech-resources/articles/storage-engine.html ). It was
designed to handle INSERTs very quickly. Many people use it, for example, for
storing logs.
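A minimal sketch of a log table using it (the column names are only an
example); keep in mind that ARCHIVE tables accept INSERT and SELECT but not
UPDATE or DELETE, and the rows are stored compressed:

  CREATE TABLE access_log (
    logged_at  DATETIME     NOT NULL,
    client_ip  VARCHAR(45)  NOT NULL,
    message    VARCHAR(255)
  ) ENGINE=ARCHIVE;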
> Hi list,
>
> I want to insert 1 records/s
One way is to keep the db schema under a revision control system and update
it every N minutes.
% crontab -l
0 * * * * mysqldump testdb --no-data > testdb_schema.sql && svn ci -m "db schema: `date`" > /dev/null
> I have two databases, one in a production environment (let's call it
> db_prod)
As Mr. Withers also indicated, I meant prefixing the *filename* of each
change script with the date, and of course also keeping all changes in a
single folder (or a limited set of folders), so you can easily collect and
sequentially apply all of them when release time comes.
Also, it is preferable that d…
Non-linearity in the insert rate means you have indexes on some columns.
Depending on your situation, MySQL can be more efficient if you drop those
indexes, do the bulk inserts, and then add the indexes again.
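A rough sketch of that approach (the table, column and index names are made
up, and LOAD DATA is just one way of doing the bulk load):

  -- drop the secondary index so it is not maintained row by row
  ALTER TABLE metrics DROP INDEX idx_metrics_user_id;

  -- bulk load the data
  LOAD DATA INFILE '/tmp/metrics.csv' INTO TABLE metrics
    FIELDS TERMINATED BY ',';

  -- rebuild the index in one pass at the end
  ALTER TABLE metrics ADD INDEX idx_metrics_user_id (user_id);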
On 1/23/10 5:02 AM, "Krishna Chandra Prajapati" wrote:
> Hi Shawn,
>
> As the data grows to 20 million, the insert rate becomes very slow.
Hi Shawn,
As the data grows to 20 million, the insert rate becomes very slow. In that
case I am getting only 2,000 inserts/second.
Therefore my objective is not achieved.
I cannot slow down the insert rate of 10,000/second; the data is being
inserted by users at that rate.
Is there any other way?