ANN: Upscene releases Database Workbench 5.6.0

2019-02-26 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce the availability of
the next version of the popular multi-DBMS development tool:

" Database Workbench 5.6.0 "

This new release brings you the Command Line Data Pump & Favorites feature (Pro 
Edition) and a few important bugfixes. 

Version 5.5 brought you support for the latest versions of supported database 
systems, including PostgreSQL 11, InterBase 2017 and MySQL 8.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!

Here's the full list of changes
http://www.upscene.com/go/?go=tracker=5.6.0=12
and for version 5.5.0
http://www.upscene.com/go/?go=tracker=5.5.0=12

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Database Workbench 5.5.0 released and free Lite Edition for MySQL

2019-01-15 Thread Martijn Tonies (Upscene Productions)
Upscene releases Database Workbench 5.5.0

Upscene Productions is proud to announce the availability of
the next version of the popular multi-DBMS development tool:

" Database Workbench 5.5.0 "

This new release brings you support for the latest versions of supported 
database systems, including PostgreSQL 11, InterBase 2017 and MySQL 8.

There's also a new release of the free Lite Edition for MySQL: version 5.4.6 
has been made available today.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!

Here's the full list of changes
http://www.upscene.com/go/?go=tracker=5.5.0=12
and for version 5.4.x
http://www.upscene.com/go/?go=tracker=5.4.x=12

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Upscene releases Database Workbench 5.4.6

2018-10-23 Thread Martijn Tonies (Upscene Productions)
Upscene releases Database Workbench 5.4.6

Upscene Productions is proud to announce the availability of
the next version of the popular multi-DBMS development tool:

" Database Workbench 5.4.6 "

This is a bugfix release; previous releases included support for MariaDB 10.1 
and 10.2, MySQL 5.7 and Azure, a custom report writer, a renewed stored routine 
debugger with full support for Firebird 3 Stored Functions and Packages and 
several other features.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!

Here's the full list of changes
http://www.upscene.com/go/?go=tracker=5.4.x=12

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

[ANN] [Qt ORM] QxOrm 1.4.5 and QxEntityEditor 1.2.3 released: MongoDB database support and source code now on GitHub

2018-09-06 Thread QxOrm contact
Hello,

QxOrm library 1.4.5 and QxEntityEditor application 1.2.3 have just been released:
https://www.qxorm.com/


QxOrm library 1.4.5 change log:
- Support for MongoDB: the QxOrm library becomes a C++/Qt Object Document Mapper (ODM) library!
- For more details about the MongoDB integration, see the QxOrm manual (https://www.qxorm.com/qxorm_en/manual.html#manual_95) and the new sample project available in the ./test/qxBlogMongoDB/ directory
- QxOrm library is now available on GitHub (official repository): https://github.com/QxOrm/QxOrm
- Fix an issue in the qx::IxSqlQueryBuilder class when the QxOrm library is used in a multi-thread environment
- Support the latest version of boost (1.66)
- Update the boost portable binary serialization classes to version 5.1 (provided by https://archive.codeplex.com/?p=epa)
- Fix an issue building SQL queries for Oracle (it doesn't support the AS keyword for table aliases)
- Improve the qx::QxClassX::registerAllClasses() function: possibility to initialize all relations (useful to work with the introspection engine)
- Improve the qx::IxPersistable interface: provide new toJson() / fromJson() methods
- Improve documentation/website: change http://www.qxorm.com to https://www.qxorm.com everywhere
- Fix fetching relations with soft delete by putting the SQL condition in the JOIN part instead of the WHERE part
- Fix the SQL generator for Oracle: use the new limit/pagination syntax (Oracle > 12.1)
- Improve the SQL generator interface: add an 'onBeforeSqlPrepare()' method to modify/log SQL queries in custom classes
- Add an option in the qx::QxSqlDatabase class to format SQL queries (pretty-printing) before logging them (can be customized by creating a qx::dao::detail::IxSqlGenerator sub-class)
- Fix an issue with boost/std::optional (used to manage NULL database values) and some databases: if the optional is empty, then create a NULL QVariant based on QVariant::Type
- Add an option in the qx::QxSqlDatabase class to insert square brackets (or any other delimiters) around table and/or column names in SQL queries (to support database-specific keywords)
- Improve the introspection engine: add a getType() method to the qx::IxDataMember interface to get the C++ type of a property dynamically
- Improve the qx::QxSqlDatabase singleton settings class to make it easier to work with several databases: there are now 3 levels of settings: global >> per thread >> per database (see the 'bJustForCurrentThread' and 'pJustForThisDatabase' optional parameters in all set() methods)
- Fix QxOrm.pri for the MinGW compiler on Windows: an issue could occur exporting some symbols from a shared library (some Qt signals for example)
- Add an option in the qx::QxSqlDatabase singleton class to display only slow SQL queries (see the setTraceSqlOnlySlowQueriesDatabase() and setTraceSqlOnlySlowQueriesTotal() methods)

QxEntityEditor application 1.2.3 change log:
- Fix a crash which sometimes appeared when drawing relationships (orthogonal way) for complex database schemas
- Improve the QxEntityEditor command line parameters: possibility to import/export without using the GUI (useful to manage a Jenkins server, for example)
- For more details about the command line parameters, see the QxEntityEditor documentation: https://www.qxorm.com/qxorm_en/manual_qxee.html#qxee_command_line

You can download the latest version of the QxOrm library and the QxEntityEditor
application on the QxOrm website: https://www.qxorm.com/

Regards,


Lionel Marty - QxOrm library


Re: ANN: Upscene releases Database Workbench 5.4.4

2018-07-09 Thread webmaster




Sent from my Samsung Galaxy smartphone.

 Original message 
From: "Martijn Tonies (Upscene Productions)"
Date: 09/07/2018 13:17 (GMT+00:00)
To: firebird-to...@yahoogroups.com, mysql@lists.mysql.com
Subject: ANN: Upscene releases Database Workbench 5.4.4

ANN: Upscene releases Database Workbench 5.4.4

2018-07-09 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce the availability of the next version 
of the popular multi-DBMS development tool:

" Database Workbench 5.4.4 “

This release includes support for MariaDB 10.1 and 10.2, MySQL 5.7 and Azure 
and the latest version of PostgreSQL. Previous releases included a custom 
report writer, a renewed stored routine debugger with full support for Firebird 
3 Stored Functions and Packages and several other features.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!

Here's the full list of changes
http://www.upscene.com/go/?go=tracker=5.4.x=12

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Upscene releases Database Workbench 5.4.2

2018-03-06 Thread Martijn Tonies (Upscene Productions)
Upscene releases Database Workbench 5.4.2

Upscene Productions is proud to announce the availability of
the next version of the popular multi-DBMS development tool:

" Database Workbench 5.4.2 “

This release includes support for MariaDB 10.1 and 10.2, MySQL 5.7 and Azure 
and the latest version of PostgreSQL. Previous releases included a custom 
report writer, a renewed stored routine debugger with full support for Firebird 
3 Stored Functions and Packages and several other features.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!

Here's the full list of changes
http://www.upscene.com/go/?go=tracker=5.4.x=12

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Upscene releases Database Workbench 5.4.0

2018-02-05 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce the availability of
the next version of the popular multi-DBMS development tool:

" Database Workbench 5.4.0 "

This release includes support for MariaDB 10.1 and 10.2, MySQL 5.7 and Azure 
and the latest version of PostgreSQL. Previous releases included a custom 
report writer, a renewed stored routine debugger with full support for Firebird 
3 Stored Functions and Packages and several other features.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!

Here's the full list of changes
http://www.upscene.com/go/?go=tracker=5.4.0=12

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Upscene releases Database Workbench 5.3.4

2017-10-27 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce the availability of the next version 
of the popular multi-DBMS development tool:

" Database Workbench 5.3.4 "

This release includes bugfixes on version 5.3 which included a custom report 
writer, increased support for PostgreSQL, a renewed stored routine debugger 
with full support for Firebird 3 Stored Functions and Packages, and several 
other features.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!


"Version 5 included many new features", says Martijn Tonies, founder of Upscene 
Productions. 
"It added code editor features, has diagramming improvements, multiple 
editions, is fully HiDPI aware and offers secure connections to PostgreSQL, 
MySQL and MariaDB. The most recent version adds a custom report writer and 
increased support for PostgreSQL, as requested by many of our customers."

Here's the full list of changes
http://www.upscene.com/go/?go=tracker=5.3.4=12

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

Upscene releases Database Workbench 5.3.2

2017-06-30 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce the availability of
the next version of the popular multi-DBMS development tool:

" Database Workbench 5.3.2 "

This release includes bugfixes on version 5.3 which included a custom report 
writer, increased support for PostgreSQL, a renewed stored routine debugger 
with full support for Firebird 3 Stored Functions and Packages, and several 
other features.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!



"Version 5 included many new features", says Martijn Tonies, founder of Upscene 
Productions. 
"It added code editor features, has diagramming improvements, multiple 
editions, is fully HiDPI aware and offers secure connections to PostgreSQL, 
MySQL and MariaDB. The most recent version adds a custom report writer and 
increased support for PostgreSQL, as requested by many of our customers."

Here's the full list of changes
http://www.upscene.com/go/?go=tracker=5.3.2=12

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

Upscene releases Database Workbench 5.3

2017-04-19 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce the availability of
the next version of the popular multi-DBMS development tool:

" Database Workbench 5.3 "

This release includes a custom report writer, increased support for PostgreSQL, 
a renewed stored routine debugger with full support for Firebird 3 Stored 
Functions and Packages and adds several other features.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!



"Version 5 included many new features", says Martijn Tonies, founder of Upscene 
Productions. 
"It added code editor features, has diagramming improvements, multiple 
editions, is fully HiDPI aware and offers secure connections to PostgreSQL, 
MySQL and MariaDB. The most recent version adds a custom report writer and 
increased support for PostgreSQL, as requested by many of our customers."


For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Database Workbench 5.2 now includes PostgreSQL support

2016-09-08 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce the availability of
the next version of the popular multi-DBMS development tool:

" Database Workbench 5.2 "

This release includes support for PostgreSQL and adds several other features.

Database Workbench 5 comes in multiple editions with different pricing models, so 
there's always a version that suits you!



"Version 5 included many new features", says Martijn Tonies, founder of Upscene 
Productions. 
"It added code editor features, has diagramming improvements, multiple 
editions, is fully HiDPI aware and offers tunnelling for MySQL and MariaDB 
connections.
The most recent version adds support for PostgreSQL, as requested by many of 
our customers."


For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB, PostgreSQL and InterBase, comes in multiple editions and 
is licensed based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 12 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 14 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.



Re: filename-safe conversion of database-/tablenames

2016-08-11 Thread Johan De Meersman


- Original Message -
> From: "Simon Fromme" <fro...@tralios.de>
> Subject: filename-safe conversion of database-/tablenames
> 
> I need to convert both the names of databases and tables in a
> filename-safe way (escaping "/" and other characters as in [1]). The

I don't know what strange table names you're expecting, but under *nix almost 
anything short of / (directory separator) is valid in a filename, even the 
wildcard characters ? and *.


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




filename-safe conversion of database-/tablenames

2016-08-09 Thread Simon Fromme

Hello,

In order to do MySQL-dumps to a file on a Linux system under

$BACKUP_DIR/$DB_NAME/$TABLE_NAME.sql

I need to convert both the names of databases and tables in a 
filename-safe way (escaping "/" and other characters as in [1]). The 
mapping of MySQL database/table name to the according filenames should 
preferably be the same that MySQL (or the particular DB engine) uses. If 
that's not possible the mapping should at least be injective and 
preferably be human readable.


I found out that MySQL uses the C function 
tablename_to_filename(...) [2] internally, but didn't find a way in which 
it exposes this conversion function to the outside.


Did I overlook some way this could be done? If not, would this be a 
feature that a future version of MySQL should provide?
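
As an illustrative sketch only (assuming MySQL 5.7 with InnoDB and the PROCESS
privilege; 'mydb' is a placeholder schema name), rather than re-implementing
tablename_to_filename() one can look up the already-encoded on-disk names via
the InnoDB data-dictionary views in information_schema:

-- Sketch: list each InnoDB table together with the path of its datafile,
-- whose file name already carries MySQL's identifier-to-filename encoding.
SELECT t.NAME AS innodb_table_name,
       d.PATH AS datafile_path
  FROM information_schema.INNODB_SYS_TABLES    AS t
  JOIN information_schema.INNODB_SYS_DATAFILES AS d
    ON d.SPACE = t.SPACE
 WHERE t.NAME LIKE 'mydb/%';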



Best regards
Simon Fromme


[1]: https://dev.mysql.com/doc/refman/5.7/en/identifier-mapping.html
[2]: http://osxr.org:8080/mysql/source/sql/sql_table.cc




ANN: Beta of the Database Designer for MySQL 2.1.8 is out!

2016-07-19 Thread Aleksander Andreev
This beta introduces brand-new SSH tunneling machinery out of the box! No
third-party software needs to be installed. There is also more flexible
handling of column default values: from now on, the developer can control
whether these values are quoted, allowing empty strings or function calls
to be set as defaults.

Full change log:
[!] Brand new SSH tunneling introduced
[+] Option to quote or not default values for domains and columns added
[*] Model file format changed. Notice generated for old models on Open

You're welcome to download the Database Designer for MySQL 2.1.8-beta right
now at:
http://microolap.com/products/database/mysql-designer/download/

Please don't hesitate to ask any questions or report bugs with our Support
Ticketing system available at
http://www.microolap.com/support/


-- 
Aleksandr Andreev
Developer
MicroOLAP Technologies LTD
aleksandr.andr...@gf.microolap.com
http://microolap.com


Re: call the appropriate and correct database

2016-06-20 Thread Johan De Meersman


- Original Message -
> From: "HaidarPesebe" <haidarpes...@gmail.com>
> Subject: call the appropriate and correct database

> How do I call first database table as follows :
> 
> id | country | province | distric | cost
> 
> 1 | USA | Alanama | distrik | 20
> 2 | USA | Alabama | distrik2 | 22
> 3 | USA | Alabama | distrik3 | 22
> 4 | France | Paris | disrik4 | 30

You want to normalize your data by splitting that out into separate tables for 
country, province and district, and referencing the higher level instead. You'd 
get something like this:

| COUNTRIES     |
| c_id | name   |
|    1 | USA    |
|    2 | France |

| PROVINCES                 |
| p_id | c_id | name        |
|    1 |    1 | Alabama     |
|    2 |    1 | Washington  |
|    3 |    2 | Paris       |
|    4 |    2 | Nord-Calais |

| DISTRICTS              |
| d_id | p_id | name     |
|    1 |    1 | distrik  |
|    2 |    1 | distrik2 |
|    3 |    1 | distrik3 |
|    4 |    3 | distrik4 |


That way, you can fill your dropdowns by a simple, fast select statement. 
Whenever you need more complex bits, you can just join the tables as necessary.

Google for "database normalisation" for more info about this practice, and find 
information about "foreign keys constraints" to ensure consistency in your 
database.
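
As a rough sketch of what that normalized layout could look like in SQL (table
and column names and types here are illustrative, not taken from the original
post; InnoDB is assumed for the foreign keys):

-- Illustrative only: normalized tables with foreign keys.
CREATE TABLE countries (
  c_id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);

CREATE TABLE provinces (
  p_id INT AUTO_INCREMENT PRIMARY KEY,
  c_id INT NOT NULL,
  name VARCHAR(100) NOT NULL,
  FOREIGN KEY (c_id) REFERENCES countries (c_id)
);

CREATE TABLE districts (
  d_id INT AUTO_INCREMENT PRIMARY KEY,
  p_id INT NOT NULL,
  name VARCHAR(100) NOT NULL,
  cost DECIMAL(10,2),
  FOREIGN KEY (p_id) REFERENCES provinces (p_id)
);

-- Filling a dropdown then becomes a small indexed lookup, e.g. all
-- provinces of a chosen country:
SELECT p_id, name FROM provinces WHERE c_id = 1 ORDER BY name;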



-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




call the appropriate and correct database

2016-06-19 Thread Abdullah Bahar

How do I call first database table as follows :

id | country | province | distric | cost

1 | USA | Alanama | distrik | 20
2 | USA | Alabama | distrik2 | 22
3 | USA | Alabama | distrik3 | 22
4 | France | Paris | disrik4 | 30

Now I use 3 kinds of calls for the select form below. I am using a jQuery 
chain (no problem):


1. SELECT country from table GROUP BY country ORDER BY country ASC

Country


2. SELECT province from table GROUP BY province ORDER BY province ASC

Province


3. SELECT distric from table GROUP BY distric ORDER BY distric ASC

distric


Is the way I call the database appropriate, or is there 
another, simpler way?

The page takes very long to load.
Kindly enlighten me.





ANN: Database Designer for MySQL 2.1.7 released!

2016-05-19 Thread Aleksander Andreev
New maintenance release introduces JSON type support.

Full change log:
[!] JSON type support added!
[-] "Rare EStringListError during Table editor opening" bug fixed

You're welcome to download the Database Designer for MySQL 2.1.7 right now
at:
http://microolap.com/products/database/mysql-designer/download/

Login to your private area on our site at http://microolap.com/my/keys/ to
obtain your key if you have a license.

Please don't hesitate to ask any questions or report bugs with our Support
Ticketing system available at http://www.microolap.com/support/

-- 
Aleksandr Andreev
Developer
MicroOLAP Technologies LTD
aleksandr.andr...@gf.microolap.com
http://microolap.com


ANN: Upscene releases Database Workbench 5.1.12

2016-03-23 Thread Martijn Tonies (Upscene Productions)
Upscene releases Database Workbench 5.1.12

Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5.1.12 "

This release adds a few small features and useful enhancements.

The full version 5.1.x change log is available here:
http://www.upscene.com/go/?go=tracker=5.1.x=12


Version 5 added numerous new features and improvements to existing tools; 
the new, more consistent user interface is better than ever and works fine 
under Wine on Linux.

Version 5.1 includes SQL Azure and Firebird 3 support, additional data export 
functionality, improved Oracle XML and Object-type support, Diagramming 
enhancements 
and new printing features as well as improvements in other areas.

http://www.upscene.com/news/item/20160322


Database Workbench now comes in multiple editions with different
pricing models, so there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server, 
SQL Azure,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

Oracle Certified Professional, MySQL 5.6 Database Administrator

2016-03-09 Thread Lukas Lehner
Hi

when will be the exam "Oracle Certified Professional, MySQL 5.6 Database
Administrator" for MySQL 5.7?

Lukas


Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
Ah, ok, if I understand correctly within this context every record in the
one table _should_ have a unique identifier.  Please verify this is the
case, though: if, for example, the primary key is an auto-increment, what I'm
going to suggest is not good and Really Bad Things will, not may, happen.

If you want to do this all in MySQL, and IFF the records are ensured to be
*globally unique*, then what I suggested previously would work but isn't
necessary (and is actually dangerous if global record uniqueness is not
definite).  You _could_ do a standard mysqldump (use flags to do data only,
no schema) and on the importing server it will insert the records, and if
there are duplicate records they will fail. If there is a chance the
records aren't unique, or if you want to be extra super safe (good idea
anyway), you can add triggers on the ingest server to ensure
uniqueness, capture failures and record them in another table for analysis,
or perhaps even do immediate data remediation (update the key) and then insert.

Now, for me, using triggers or other business-logic-in-database features is
a code smell.  I loathe putting business logic in databases, as it tends to
be non-portable and hard to troubleshoot for the people behind me who
expect the logic to be in code.  Since you're having to script this
behavior out anyway, if it were me I would dump the data in the table to
CSV or similar using SELECT ... INTO OUTFILE rather than mysqldump, ship the
file, and have a small php script on cron or whatever ingest it, allowing
your business logic for data validation etc. to be done in code (IMO where
it belongs).
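
A minimal sketch of that export-and-ingest idea, kept in SQL for brevity (every
table, column and file name below is hypothetical, and the real validation
logic would live in the ingesting script):

-- On the collecting box: export the rows as CSV (needs the FILE privilege).
SELECT id, country, province, district, cost
  INTO OUTFILE '/tmp/answers_2016-02-29.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM answers;

-- On the aggregating box: load into a staging table first, then merge only
-- rows whose key is not already present.
CREATE TABLE answers_staging LIKE answers;
LOAD DATA INFILE '/tmp/answers_2016-02-29.csv'
  INTO TABLE answers_staging
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n';

INSERT INTO answers
SELECT s.*
  FROM answers_staging AS s
  LEFT JOIN answers AS a USING (id)
 WHERE a.id IS NULL;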

S



On Mon, Feb 29, 2016 at 12:12 PM, lejeczek <pelj...@yahoo.co.uk> wrote:

> On 29/02/16 16:32, Steven Siebert wrote:
>
>> What level of control do you have on the remote end that is
>> collecting/dumping the data?  Can you specify the command/arguments on how
>> to dump?  Is it possible to turn on binary logging and manually ship the
>> logs rather than shipping the dump, effectively manually doing
>> replication?
>>
> in an overview it's a simple php app, a form of a questionnaire that
> collects user manual input, db backend is similarly simple, just one table.
> Yes I can operate mysqldump command but nothing else, I do not have
> control over mysql config nor processes.
>
> It's one of those cases when for now it's too late and you are only
> thinking - ough... that remote box, if compromised would be good to have
> only a minimal set of data on it.
>
> So I can mysqldump any way it'd be best and I'd have to insert ideally not
> replacing anything, instead aggregating, adding data.
> I think developers took care of uniqueness of the rows, and constructed it
> in conformity with good design practices.
>
> What I'm only guessing is: when I lock, dump and remove, then insert and
> aggregate, could there be problems with keys? And is there any data loss during
> dump+removal?
>
> thanks for sharing your thoughts.
>
>
>> I agree with others, in general this approach smells like a bad idea.
>> However, updating data from a remote system in batch is quite common,
>> except often it's done at the application level polling things like web
>> services and perhaps some business logic to ensure integrity is
>> maintained.  Attempting to do it within the constructs of the database
>> itself is understandable, but there are risks when not adding that "layer"
>> of logic to ensure state is exactly as you expect it during a merge.
>>
>> At risk of giving you too much rope to hang yourself: if you use mysqldump
>> to dump the database, if you use the --replace flag you'll convert all
>> INSERT statements to REPLACE, which when you merge will update or insert
>> the record, effectively "merging" the data.  This may be one approach you
>> want to look at, but may not be appropriate depending on your specific
>> situation.
>>
>> S
>>
>>
>>
>> On Mon, Feb 29, 2016 at 11:12 AM, lejeczek <pelj...@yahoo.co.uk> wrote:
>>
>> On 29/02/16 15:42, Gary Smith wrote:
>>>
>>> On 29/02/2016 15:30, lejeczek wrote:
>>>>
>>>> On 28/02/16 20:50, lejeczek wrote:
>>>>>
>>>>> fellow users, hopefully you experts too, could help...
>>>>>>
>>>>>> ...me to understand how, and what should be the best practice to dump
>>>>>> database, then drop it and merge the dumps..
>>>>>> What I'd like to do is something probably many have done and I wonder
>>>>>> how it's done best.
>>>>>> A box will be dumping a database (maybe? tables if it's better) then
>>>>>> dropping (purging the data) it and on a different sys

Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
Totally with you, I had to get up and wash my hands after writing such
filth =)

On Mon, Feb 29, 2016 at 12:14 PM, Gary Smith <li...@l33t-d00d.co.uk> wrote:

> On 29/02/2016 16:32, Steven Siebert wrote:
>
>>
>> At risk of giving you too much rope to hang yourself: if you use
>> mysqldump to dump the database, if you use the --replace flag you'll
>> convert all INSERT statements to REPLACE, which when you merge will update
>> or insert the record, effectively "merging" the data.  This may be one
>> approach you want to look at, but may not be appropriate depending on your
>> specific situation.
>>
>> I'd considered mentioning this myself, but this was the root of my
> comment about integrity - if the original database or tables are dropped,
> then the replace command will cause the data to poo all over the original
> dataset. As you mentioned in your (snipped) reply, this can go badly wrong
> in a short space of time without the correct controls in place. Even if
> they are in place, I'd have trouble sleeping at night if this were my
> circus.
>
> Gary
>


Re: dump, drop database then merge/aggregate

2016-02-29 Thread Gary Smith

On 29/02/2016 16:32, Steven Siebert wrote:


At risk of giving you too much rope to hang yourself: if you use 
mysqldump to dump the database, if you use the --replace flag you'll 
convert all INSERT statements to REPLACE, which when you merge will 
update or insert the record, effectively "merging" the data.  This may 
be one approach you want to look at, but may not be appropriate 
depending on your specific situation.


I'd considered mentioning this myself, but this was the root of my 
comment about integrity - if the original database or tables are 
dropped, then the replace command will cause the data to poo all over 
the original dataset. As you mentioned in your (snipped) reply, this can 
go badly wrong in a short space of time without the correct controls in 
place. Even if they are in place, I'd have trouble sleeping at night if 
this were my circus.


Gary




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 29/02/16 16:32, Steven Siebert wrote:

What level of control do you have on the remote end that is
collecting/dumping the data?  Can you specify the command/arguments on how
to dump?  Is it possible to turn on binary logging and manually ship the
logs rather than shipping the dump, effectively manually doing replication?
in an overview it's a simple php app, a form of a 
questionnaire that collects user manual input, db backend is 
similarly simple, just one table.
Yes I can operate mysqldump command but nothing else, I do 
not have control over mysql config nor processes.


It's one of those cases when for now it's too late and you 
are only thinking - ough... that remote box, if compromised 
would be good to have only a minimal set of data on it.


So I can mysqldump any way it'd be best and I'd have to 
insert ideally not replacing anything, instead aggregating, 
adding data.
I think developers took care of uniqueness of the rows, and 
constructed it in conformity with good design practices.


What I'm only guessing is: when I lock, dump and remove, then 
insert and aggregate, could there be problems with keys? And is 
there any data loss during dump+removal?


thanks for sharing your thoughts.


I agree with others, in general this approach smells like a bad idea.
However, updating data from a remote system in batch is quite common,
except often it's done at the application level polling things like web
services and perhaps some business logic to ensure integrity is
maintained.  Attempting to do it within the constructs of the database
itself is understandable, but there are risks when not adding that "layer"
of logic to ensure state is exactly as you expect it during a merge.

At risk of giving you too much rope to hang yourself: if you use mysqldump
to dump the database, if you use the --replace flag you'll convert all
INSERT statements to REPLACE, which when you merge will update or insert
the record, effectively "merging" the data.  This may be one approach you
want to look at, but may not be appropriate depending on your specific
situation.

S



On Mon, Feb 29, 2016 at 11:12 AM, lejeczek <pelj...@yahoo.co.uk> wrote:


On 29/02/16 15:42, Gary Smith wrote:


On 29/02/2016 15:30, lejeczek wrote:


On 28/02/16 20:50, lejeczek wrote:


fellow users, hopefully you experts too, could help...

...me to understand how, and what should be the best practice to dump
database, then drop it and merge the dumps..
What I'd like to do is something probably many have done and I wonder
how it's done best.
A box will be dumping a database (maybe? tables if it's better) then
dropping (purging the data) it and on a different system that dump will be
inserted/aggregated into the same database.
It reminds me a kind of incremental backup except for the fact that
source data will be dropped/purged on regular basis, but before a drop, a
dump which later will be used to sort of reconstruct that same database.

How do you recommend to do it? I'm guessing trickiest bit might this
reconstruction part, how to merge dumps safely, naturally while maintaining
consistency & integrity?
Actual syntax, as usually any code examples are, would be best.

many thanks.


I guess dropping tables is not really what I should even consider -

should I just be deleting everything from tables in order to remove data?
And if I was to use dumps of such a database (where data was first
cleansed then some data was collected) to merge data again would it work
and merge that newly collected data with what's already in the database


This sounds like a remarkably reliable way to ensure no data integrity.
What exactly are you trying to achieve? Would replication be the magic word
you're after?

I realize this all might look rather like a bird fiddling with a worm

instead of a lion going for the quick kill. I replicate wherever I need and can;
here I have very little control over one end.
On that end with little control there is one simple database, which data
I'll need to be removed on regular basis, before removing I'll be dumping
and I need to use those dumps to add, merge, aggregate data to a database
on the other end, like:
today both databases are mirrored/identical
tonight awkward end will dump then remove all the data, then collect some
and again, dump then remove
and these dumps should reconstruct the database on the other box.

Pointers on what to pay the attention to, how to test for consistency &
integrity, would be of great help.


Gary











Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
What level of control do you have on the remote end that is
collecting/dumping the data?  Can you specify the command/arguments on how
to dump?  Is it possible to turn on binary logging and manually ship the
logs rather than shipping the dump, effectively manually doing replication?

I agree with others, in general this approach smells like a bad idea.
However, updating data from a remote system in batch is quite common,
except often it's done at the application level polling things like web
services and perhaps some business logic to ensure integrity is
maintained.  Attempting to do it within the constructs of the database
itself is understandable, but there are risks when not adding that "layer"
of logic to ensure state is exactly as you expect it during a merge.

At risk of giving you too much rope to hang yourself: if you use mysqldump
to dump the database, if you use the --replace flag you'll convert all
INSERT statements to REPLACE, which when you merge will update or insert
the record, effectively "merging" the data.  This may be one approach you
want to look at, but may not be appropriate depending on your specific
situation.
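
To make the difference concrete (hypothetical table and values): a --replace
dump replays REPLACE statements, which delete-and-reinsert any row whose unique
key already exists, whereas INSERT ... ON DUPLICATE KEY UPDATE lets you choose
what an existing row keeps.

-- What a --replace dump effectively replays:
REPLACE INTO answers (id, country, cost) VALUES (42, 'USA', 20);

-- A gentler merge, if overwriting whole existing rows is not what you want:
INSERT INTO answers (id, country, cost) VALUES (42, 'USA', 20)
  ON DUPLICATE KEY UPDATE cost = VALUES(cost);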

S



On Mon, Feb 29, 2016 at 11:12 AM, lejeczek <pelj...@yahoo.co.uk> wrote:

> On 29/02/16 15:42, Gary Smith wrote:
>
>> On 29/02/2016 15:30, lejeczek wrote:
>>
>>> On 28/02/16 20:50, lejeczek wrote:
>>>
>>>> fellow users, hopefully you experts too, could help...
>>>>
>>>> ...me to understand how, and what should be the best practice to dump
>>>> database, then drop it and merge the dumps..
>>>> What I'd like to do is something probably many have done and I wonder
>>>> how it's done best.
>>>> A box will be dumping a database (maybe? tables if it's better) then
>>>> dropping (purging the data) it and on a different system that dump will be
>>>> inserted/aggregated into the same database.
>>>> It reminds me a kind of incremental backup except for the fact that
>>>> source data will be dropped/purged on regular basis, but before a drop, a
>>>> dump which later will be used to sort of reconstruct that same database.
>>>>
>>>> How do you recommend to do it? I'm guessing trickiest bit might this
>>>> reconstruction part, how to merge dumps safely, naturally while maintaining
>>>> consistency & integrity?
>>>> Actual syntax, as usually any code examples are, would be best.
>>>>
>>>> many thanks.
>>>>
>>>>
>>>> I guess dropping tables is not really what I should even consider -
>>> should I just be deleting everything from tables in order to remove data?
>>> And if I was to use dumps of such a database (where data was first
>>> cleansed then some data was collected) to merge data again would it work
>>> and merge that newly collected data with what's already in the database
>>>
>> This sounds like a remarkably reliable way to ensure no data integrity.
>> What exactly are you trying to achieve? Would replication be the magic word
>> you're after?
>>
>> I realize this all might look rather like a bird fiddling with a worm
> instead of a lion going for the quick kill. I replicate wherever I need and can;
> here I have very little control over one end.
> On that end with little control there is one simple database, which data
> I'll need to be removed on regular basis, before removing I'll be dumping
> and I need to use those dumps to add, merge, aggregate data to a database
> on the other end, like:
> today both databases are mirrored/identical
> tonight awkward end will dump then remove all the data, then collect some
> and again, dump then remove
> and these dumps should reconstruct the database on the other box.
>
> Pointers on what to pay the attention to, how to test for consistency &
> integrity, would be of great help.
>
>
> Gary
>>
>>
>
>
>


Re: dump, drop database then merge/aggregate

2016-02-29 Thread Johan De Meersman
- Original Message -
> From: "lejeczek" <pelj...@yahoo.co.uk>
> Subject: Re: dump, drop database then merge/aggregate
> 
> today both databases are mirrored/identical
> tonight awkward end will dump then remove all the data, then
> collect some and again, dump then remove
> and these dumps should reconstruct the database on the other
> box.

It sounds like a horrible mess, to be honest. It's also pretty hard to 
recommend possible paths without knowing what's inside. Is it an option for you 
to simply import the distinct dumps into different schemas? That way there 
would be no need for merging the data, you just query the particular dataset 
you're interested in.
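
A minimal sketch of that idea (schema names are hypothetical; the dump import
itself runs from the shell):

-- One schema per received dump; nothing is merged, so nothing can clash.
CREATE DATABASE IF NOT EXISTS survey_2016_02_29;
-- shell: mysql survey_2016_02_29 < dump_2016-02-29.sql

-- When a combined view is needed, query across the snapshots explicitly:
SELECT * FROM survey_2016_02_28.answers
UNION ALL
SELECT * FROM survey_2016_02_29.answers;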


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 29/02/16 15:42, Gary Smith wrote:

On 29/02/2016 15:30, lejeczek wrote:

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how, and what should be the best 
practice to dump database, then drop it and merge the 
dumps..
What I'd like to do is something probably many have done 
and I wonder how it's done best.
A box will be dumping a database (maybe? tables if it's 
better) then dropping (purging the data) it and on a 
different system that dump will be inserted/aggregated 
into the same database.
It reminds me a kind of incremental backup except for 
the fact that source data will be dropped/purged on 
regular basis, but before a drop, a dump which later 
will be used to sort of reconstruct that same database.


How do you recommend to do it? I'm guessing trickiest 
bit might this reconstruction part, how to merge dumps 
safely, naturally while maintaining consistency & 
integrity?
Actual syntax, as usually any code examples are, would 
be best.


many thanks.


I guess dropping tables is not really what I should 
even consider - should I just be deleting everything from 
tables in order to remove data?
And if I was to use dumps of such a database (where data 
was first cleansed then some data was collected) to merge 
data again would it work and merge that newly collected 
data with what's already in the database
This sounds like a remarkably reliable way to ensure no 
data integrity. What exactly are you trying to achieve? 
Would replication be the magic word you're after?


I realize this all might look rather like a bird fiddling 
with a worm instead of a lion going for the quick kill. I 
replicate wherever I need and can; here I have very little 
control over one end.
On that end with little control there is one simple 
database, which data I'll need to be removed on regular 
basis, before removing I'll be dumping and I need to use 
those dumps to add, merge, aggregate data to a database on 
the other end, like:

today both databases are mirrored/identical
tonight awkward end will dump then remove all the data, then 
collect some and again, dump then remove
and these dumps should reconstruct the database on the other 
box.


Pointers on what to pay the attention to, how to test for 
consistency & integrity, would be of great help.



Gary







Re: dump, drop database then merge/aggregate

2016-02-29 Thread Gary Smith

On 29/02/2016 15:30, lejeczek wrote:

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how, and what should be the best practice to dump 
database, then drop it and merge the dumps..
What I'd like to do is something probably many have done and I wonder 
how it's done best.
A box will be dumping a database (maybe? tables if it's better) then 
dropping (purging the data) it and on a different system that dump 
will be inserted/aggregated into the same database.
It reminds me a kind of incremental backup except for the fact that 
source data will be dropped/purged on regular basis, but before a 
drop, a dump which later will be used to sort of reconstruct that 
same database.


How do you recommend to do it? I'm guessing trickiest bit might this 
reconstruction part, how to merge dumps safely, naturally while 
maintaining consistency & integrity?

Actual syntax, as usually any code examples are, would be best.

many thanks.


I guess dropping tables is not really what I should even consider - 
should I just be deleting everything from tables in order to remove data?
And if I was to use dumps of such a database (where data was first 
cleansed then some data was collected) to merge data again would it 
work and merge that newly collected data with what's already in the 
database
This sounds like a remarkably reliable way to ensure no data integrity. 
What exactly are you trying to achieve? Would replication be the magic 
word you're after?


Gary




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how, and what should be the best 
practice to dump database, then drop it and merge the dumps..
What I'd like to do is something probably many have done 
and I wonder how it's done best.
A box will be dumping a database (maybe? tables if it's 
better) then dropping (purging the data) it and on a 
different system that dump will be inserted/aggregated 
into the same database.
It reminds me a kind of incremental backup except for the 
fact that source data will be dropped/purged on regular 
basis, but before a drop, a dump which later will be used 
to sort of reconstruct that same database.


How do you recommend doing it? I'm guessing the trickiest bit 
might be this reconstruction part: how to merge dumps safely 
while naturally maintaining consistency & integrity?
Actual syntax, as code examples usually are, would be best.


many thanks.


I guess dropping tables is not really what I should even 
consider - should I just be deleting everything from the tables 
in order to remove data?
And if I were to use dumps of such a database (where data was 
first cleansed and then some data was collected) to merge data 
again, would it work and merge that newly collected data with 
what's already in the database?



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



dump, drop database then merge/aggregate

2016-02-28 Thread lejeczek

fellow users, hopefully you experts too, could help...

...me to understand how, and what the best practice should 
be, to dump a database, then drop it and merge the dumps...
What I'd like to do is something probably many have done and 
I wonder how it's done best.
A box will be dumping a database (or maybe tables, if 
that's better), then dropping it (purging the data), and on a 
different system that dump will be inserted/aggregated into 
the same database.
It reminds me of a kind of incremental backup, except for the 
fact that the source data will be dropped/purged on a regular 
basis, but before a drop there is a dump which will later be 
used to sort of reconstruct that same database.


How do you recommend doing it? I'm guessing the trickiest bit 
might be this reconstruction part: how to merge dumps safely 
while naturally maintaining consistency & integrity?

Actual syntax, as code examples usually are, would be best.

many thanks.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Upscene releases Database Workbench 5.1.10

2015-12-16 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5.1.10 "

This release adds a few small features and includes Windows Terminal Server 
support.

The full version 5.1.x change log is available here:
http://www.upscene.com/go/?go=tracker=5.1.x=12


Version 5 added numerous new features and improvements to existing tools and 
the new, more consistent user interface is better than ever and works fine 
under Wine on Linux.


Version 5.1 includes SQL Azure and Firebird 3 support, additional data export 
functionality, improved Oracle XML and Object-type support, Diagramming 
enhancements 
and new printing features as well as improvements in other areas.

http://www.upscene.com/news/item/20151215


Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server, 
SQL Azure,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.


ANN: 20% discount on Database Workbench Basic or Pro!

2015-11-04 Thread Martijn Tonies (Upscene Productions)
Dear reader,

We're offering a 20% Mid Autumn Discount on our database design
and development product "Database Workbench", available in Basic
and Pro edition.

Use coupon code MAD15 while ordering, valid until November the 9th.

You can find more information about Database Workbench here:
http://www.upscene.com/database_workbench/



Thank you.



With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

ANN: 20% discount on Database Workbench Basic or Pro!

2015-10-23 Thread Martijn Tonies
Dear reader,

We're offering a 20% Mid Autumn Discount on our database design
and development product "Database Workbench", available in Basic
and Pro edition.

Use coupon code MAD15 while ordering.

You can find more information about Database Workbench here:
http://www.upscene.com/database_workbench/



Thank you.



With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!

Re: When to create a new database

2015-10-14 Thread Johan De Meersman
- Original Message -
> From: "Ron Piggott" <ron.pigg...@actsministries.org>
> Subject: Re: When to create a new database
>
> I would lean towards keeping it all together because of the speed
> decrease between connecting to different databases.

Heh, that consideration is a matter of semantics, and I'd guess you're used to 
Oracle? :-p

What OP (presumably) meant was "in different schemas". Terminology is 
important, y'all.


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: When to create a new database

2015-10-14 Thread Johan De Meersman
- Original Message -
> From: "Reindl Harald" <h.rei...@thelounge.net>
> Subject: Re: When to create a new database
> 
> it makes zero sense since you can use different users for the same
> database down to table and even column permissions

No, it does make some sense in the case where part of the dataset is going to 
be accessed by multiple independent applications, and I think the generic 
sports bits may actually fit that. It's cleaner from a design point of view, 
and it prevents accidentally deleting that data when the original application 
is taken out of production.

In my particular environment, we have quite a few of these generic databases; 
although from similar design ideology, they are also accessed only through 
their own REST interfaces, and not directly.

/Johan

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: When to create a new database

2015-10-12 Thread shawn l.green



On 10/10/2015 10:28 AM, Richard Reina wrote:

If I were keeping track of high school sports statistics and thus designed
the following tables:

sports, rules, statistical definitions

and

players, teams, games

Would it be a good or bad idea to put the first set of tables in a separate
database called "library" since they are used for reference and rarely
change? What would be the pros and cons of using two different databases?

Thanks



The general rule is: are the tables all closely related (as in used for 
the same business purpose)?  If they are, and possibly interdependent, 
then they normally belong in the same database.


However, if some of them are derivatives of the others, then it may 
make logical sense for the derivative tables to reside in their own 
database.


example: one database may be your "raw" data: every play, every 
statistic.  The other database may be your "summary" data: the 
meta-statistics you get by combining or summarizing the raw data. 
Querying your already-summarized data will be much faster than trying to 
query your raw data for summaries every time you need them.
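
As a minimal sketch of that raw/summary split (database, table and column 
names here are only illustrative):

  CREATE DATABASE stats_raw;
  CREATE DATABASE stats_summary;

  -- raw side: one row per play
  CREATE TABLE stats_raw.plays (
    player_id INT, game_id INT, points INT, played_on DATE
  );

  -- summary side: rebuilt (or refreshed) from the raw data
  CREATE TABLE stats_summary.player_totals AS
    SELECT player_id, SUM(points) AS total_points, COUNT(*) AS plays
    FROM stats_raw.plays
    GROUP BY player_id;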


You may want to create the same set of tables in separate databases 
organized by sport. One DB for baseball, one for football, one for 
basketball, etc. That would make it easier for you to move just one 
shard of your entire data set to a new bigger server if the need arises. 
The problem with that design is that if you wanted to see a complete 
report for each player, then you have to query as many separate tables 
as you have sports (because each part of that player's history would be 
in a separate database).



If your MySQL instance is going to be acting as the back end to a web 
application, then you would probably want to split the tables into 
databases based on their function in your program: one database for your 
program's settings (users/accounts/access control, user options, user 
preferences,...) and a different database just for the statistical data.



A "database" is just a logically grouped set of tables. What is meant by 
"logic" in that previous sentence varies widely between each situation.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



When to create a new database

2015-10-10 Thread Richard Reina
If I were keeping track of high school sports statistics and thus designed
the following tables:

sports, rules, statistical definitions

and

players, teams, games

Would it be a good or bad idea to put the first set of tables in a separate
database called "library" since they are used for reference and rarely
change? What would be the pros and cons of using two different databases?

Thanks


Re: When to create a new database

2015-10-10 Thread Ron Piggott


I would lean towards keeping it all together because of the speed 
decrease between connecting to different databases.


What I would tend to do is put some type of prefix that would keep the 
sets of tables together --- like


lib_sports
lib_rules
lib_statistical
lib_definitions

data_players
data_teams
data_games

Ron

On 10/10/15 10:28, Richard Reina wrote:

If I were keeping track of high school sports statistics and thus designed
the following tables:

sports, rules, statistical definitions

and

players, teams, games

Would it be a good or bad idea to put the first set of tables in a separate
database called "library" since they are used for reference and rarely
change? What would be the pros and cons of using two different databases?

Thanks




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: When to create a new database

2015-10-10 Thread Reindl Harald


Am 10.10.2015 um 16:28 schrieb Richard Reina:

If I were keeping track of high school sports statistics and thus designed
the following tables:

sports, rules, statistical definitions

and

players, teams, games

Would it be a good or bad idea to put the first set of tables in a separate
database called "library" since they are used for reference and rarely
change? What would be the pros and cons of using two different databases?


it makes zero sense since you can use different users for the same 
database down to table and even column permissions


with default (crap) settings innodb anyways stores all in the same big 
file, and file_per_table is, well, per table
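
For reference, the per-table and per-column grants being referred to look 
roughly like this (user, database and column names are made up; on recent 
versions each user must first exist via CREATE USER):

  CREATE USER 'reporting'@'localhost' IDENTIFIED BY 'secret';
  GRANT SELECT ON sportsdb.lib_rules TO 'reporting'@'localhost';
  GRANT SELECT (player_id, team_id) ON sportsdb.players TO 'reporting'@'localhost';
  GRANT SELECT, INSERT, UPDATE ON sportsdb.* TO 'scorer'@'localhost';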




signature.asc
Description: OpenPGP digital signature


Re: When to create a new database

2015-10-10 Thread Mogens Melander
When I read the OP I was thinking: This is one for Reindl. And here we 
go.


When dealing with data of this specific kind, you most definitely
would want a date reference. A very small computer will be able to 
handle many years of all kinds of weird sports statistics.

You need to define the goal you are looking for, and then ask the 
question.


On 2015-10-10 21:48, Reindl Harald wrote:

Am 10.10.2015 um 16:28 schrieb Richard Reina:
If I were keeping track of high school sports statistics and thus 
designed

the following tables:

sports, rules, statistical definitions

and

players, teams, games

Would it be a good or bad idea to put the first set of tables in a 
separate

database called "library" since they are used for reference and rarely
change? What would be the pros and cons of using two different 
databases?


it makes zero sense since you can use different users for the same
database down to table and even column permissions

with default (crap) settings innodb anyways stores all in the same big
file, and file_per_table is, well, per table


--
Mogens
+66 8701 33224


--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Upscene releases Database Workbench 5.1.6

2015-10-09 Thread Martijn Tonies (Upscene Productions)
Upscene releases Database Workbench 5.1.6

Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5.1.6 "

This version is the next big release after version 5 and includes new features,
enhancements and fixes.

The full version 5.1.x change log is available here:
http://www.upscene.com/go/?go=tracker=5.1.x=12


Version 5 added numerous new features and improvements to existing tools and 
the new, more consistent user interface is better than ever and works fine 
under Wine on Linux.


Version 5.1 includes SQL Azure and Firebird 3 support, additional data export 
functionality, improved Oracle XML and Object-type support, Diagramming 
enhancements 
and new printing features as well as improvements in other areas.



Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server, 
SQL Azure,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.

About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

Upscene releases Database Workbench 5.1.2

2015-07-15 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5.1.2 "

This version is the next big release after version 5 and includes new features,
enhancements and fixes.

The change log is available here:
http://www.upscene.com/go/?go=tracker&v=5.1.2&id=12
http://www.upscene.com/go/?go=tracker&v=5.1.0&id=12


Version 5 added numerous new features and improvements to existing tools and 
the new, more consistent user interface is better than ever and works fine 
under Wine on Linux.


Version 5.1 includes SQL Azure and Firebird 3 support, additional data export 
functionality, improved Oracle XML and Object-type support, Diagramming 
enhancements 
and new printing features as well as improvements in other areas.



Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server, 
SQL Azure,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

Upscene releases Database Workbench 5.1.0

2015-06-24 Thread Martijn Tonies (Upscene Productions)
For immediate release:
http://www.upscene.com/news/item/20150624

Upscene releases Database Workbench 5.1.0

Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5.1 "

This version is the next big release after version 5 and includes new features,
enhancements and fixes.

The change log is available here:
http://www.upscene.com/go/?go=tracker&v=5.1.0&id=12


Version 5 added numerous new features and improvements to existing tools and 
the new, more consistent user interface is better than ever and works fine 
under Wine on Linux.


Version 5.1 includes additional data export functionality, Firebird 3 support,
improved Oracle XML and Object-type support, Diagramming enhancements and new 
printing features as well as improvements in other areas.



Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Meet the MicroOLAP Database Designer for MySQL 2.1.5

2015-06-04 Thread Aleksander Andreev
This release improves the logging facility, adds support for more view
options, enhances control over view display preferences, and fixes some rare
bugs.

Full changelog:
[!] Error logging improved a lot
[+] Show object icons option added to Display Preferences
[+] With Check view option support added
[+] Algorithm view option support added
[+] Attributes option added to the View tab of Display Preferences
[+] Comments option added to the View tab of Display Preferences
[+] SQL Security view option support added
[+] Targets option added to the View tab of Display Preferences
[*] Add object menu item is available for the Object Tree View
[*] Handling of incorrect file paths improved in Generate Database
[*] Multiple triggers are permitted for the same Time and Event combination
as of MySQL 5.7.2
[*] Trigger name generation simplified in Table Editor
[-] Apply button is sometimes available even if no changes made in the
object editor bug fixed
[-] Cannot open model (.mdd) file in Explorer if application already
launched bug fixed

You're welcome to download the Database Designer for MySQL 2.1.5 right now
at:
http://microolap.com/products/database/mysql-designer/download/

Login to your private area on our site at http://microolap.com/my/keys/ to
obtain your key if you have a license.

Please don't hesitate to ask any questions or report bugs with our Support
Ticketing system available at http://www.microolap.com/support/

-- 
Aleksandr Andreev
Developer
MicroOLAP Technologies LTD
aleksandr.andr...@gf.microolap.com
http://microolap.com


Re: Dumping database names from bash with exclusion

2015-04-01 Thread Tim Johnson
* h...@tbbs.net h...@tbbs.net [150401 15:22]:
 On 2015/04/01 16:09, Tim Johnson wrote:
 Using Mysql 5 on darwin (OS x).
 This command
 SELECT schema_name FROM information_schema.schemata WHERE
 schema_name NOT IN
 ('mysql','information_schema','performance_schema');
 as executed from the mysql prompt gives me a dump of all databases
 except those not included in the tuple.
 
 This command
 mysql -uroot -p** -e SELECT schema_name FROM
 information_schema.schemata WHERE schema_name NOT IN
 ('mysql','information_schema','performance_schema')
 
 gives me a dump of the entire mysql help screen, _not_ the
 databases I am after.
 
 Interesting ... when I try it, I get the output that, I suspect, you 
 want. Since mine is Windows cmd-line, I put it all on one line, but I 
 believe that all Unix-likes let one continue a string until finished.
 When I break the e-string off, I get a syntax error, as if entered 
 from the MySQL command prompt, semicolon too soon. I cannot get the 
 output that you describe unless I slip question-mark in.
  Thanks for the reply.

  I haven't gotten any simple -e or --execute= options to work on my
  Mac. I've got a dual-boot setup, so booted into ubuntu 14.04 and
  -e'command' works for any test that I tried. 

  I'd chalk it up to version or an OS difference, and my need is
  easy to be met with another approach.
 
  Cheers
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Dumping database names from bash with exclusion

2015-04-01 Thread Tim Johnson
Using Mysql 5 on darwin (OS x).
This command
SELECT schema_name FROM information_schema.schemata WHERE
schema_name NOT IN
('mysql','information_schema','performance_schema');
as executed from the mysql prompt gives me a dump of all databases
except those not included in the tuple.

This command
mysql -uroot -p** -e SELECT schema_name FROM
information_schema.schemata WHERE schema_name NOT IN
('mysql','information_schema','performance_schema')

gives me a dump of the entire mysql help screen, _not_ the
databases I am after.

I could use some help on the correct syntax.

thanks
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Dumping database names from bash with exclusion

2015-04-01 Thread hsv

On 2015/04/01 16:09, Tim Johnson wrote:

Using Mysql 5 on darwin (OS x).
This command
SELECT schema_name FROM information_schema.schemata WHERE
schema_name NOT IN
('mysql','information_schema','performance_schema');
as executed from the mysql prompt gives me a dump of all databases
except those not included in the tuple.

This command
mysql -uroot -p** -e SELECT schema_name FROM
information_schema.schemata WHERE schema_name NOT IN
('mysql','information_schema','performance_schema')

gives me a dump of the entire mysql help screen, _not_ the
databases I am after.


Interesting ... when I try it, I get the output that, I suspect, you 
want. Since mine is Windows cmd-line, I put it all on one line, but I 
believe that all Unix-likes let one continue a string until finished.
When I break the e-string off, I get a syntax error, as if entered 
from the MySQL command prompt, semicolon too soon. I cannot get the 
output that you describe unless I slip question-mark in.
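
The usual culprit here is shell quoting: unquoted, the statement gets split 
into separate arguments (and the parentheses and quotes belong to the shell, 
not to mysql), which is why mysql ends up showing its help text instead of 
running the query. A sketch of the quoted form (-N drops the column header, 
-B gives plain tab-separated output; both are optional):

  mysql -uroot -p -N -B -e "SELECT schema_name FROM information_schema.schemata
    WHERE schema_name NOT IN ('mysql','information_schema','performance_schema')"

  # and, for the dump-with-exclusion goal in the subject line, one possible
  # follow-on (with -p you are prompted each time; a ~/.my.cnf avoids that):
  for db in $(mysql -uroot -p -N -B -e "SHOW DATABASES" \
              | grep -Ev '^(mysql|information_schema|performance_schema)$'); do
      mysqldump -uroot -p "$db" > "$db.sql"
  done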


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Upscene releases Database Workbench 5.0.10

2015-02-20 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5 "

This release fixes several issues as reported by our customers.

The change log is available here:
http://www.upscene.com/go/?go=tracker&v=5.0.10&id=12


Version 5 added numerous improvements to existing tools and the new, more
consistent user interface is better than ever and works fine under Wine on 
Linux.

Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Database Designer for MySQL 2.1.4 is out!

2015-02-05 Thread Aleksander Andreev
This release provides support for the latest character sets and collations,
support for fractional seconds in temporal data types, and the ON UPDATE clause
for the DATETIME data type.

Full change log:

[!] Support for Fractional Seconds in temporal data types added
[+] ON UPDATE clause support added for DATETIME field type
[+] utf8mb4 character set support added
[+] utf8mb4_xxx_ci collations support added
[+] XXX_general_mysql500_ci collations support added for utf8, utf8mb4,
utf16, utf32 and ucs2 character sets
[*] Length option for columns is limited depending on field type in the
Table Editor

You're welcome to download the Database Designer for MySQL 2.1.4 right now
at:
http://microolap.com/products/database/mysql-designer/download/

Login to your private area on our site at http://microolap.com/my/keys/ to
obtain your key if you have a license.

Please don't hesitate to ask any questions or report bugs with our Support
Ticketing system available at
http://www.microolap.com/support/


-- 
Aleksandr Andreev
Developer
MicroOLAP Technologies LTD
aleksandr.andr...@gf.microolap.com
http://microolap.com


ANN: Upscene releases Database Workbench 5.0.8

2015-01-21 Thread Martijn Tonies (Upscene Productions)
Upscene releases Database Workbench 5.0.8

Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5 "

This release fixes several issues as reported by our customers.

The change log is available here:
http://www.upscene.com/go/?go=tracker&v=5.0.8&id=12


Version 5 added numerous improvements to existing tools and the new, more
consistent user interface is better than ever and works fine under Wine on 
Linux.

Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

ANN: Database Workbench 5.0.6 released

2014-12-15 Thread Martijn Tonies (Upscene Productions)
ANN: Upscene releases Database Workbench 5.0.6

Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5 "

Version 5.0.6 fixes an issue with the MySQL module and includes some changes
for Firebird 3.

The change log is available here:
http://www.upscene.com/go/?go=tracker&v=5.0.6&id=12


Version 5 added numerous improvements to existing tools and the new, more
consistent user interface is better than ever and works fine under Wine on 
Linux.

Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.


With regards,

Martijn Tonies


ANN: Database Designer for MySQL 2.1.3 released!

2014-10-28 Thread Aleksander Andreev
This release introduces an enhanced Reverse Engineering dialog, options to
prevent annoying warning messages, full support for modern table row
formats, and support for the BLACKHOLE storage engine.

Full change log:
[+] REDUNDANT and COMPACT row formats support is added
[+] BLACKHOLE storage engine support is added
[+] Show warning before object deletion option added to Environment
Options
[+] Show warning before editor closing option added to Environment Options
[*] Reverse Engineering dialog vastly improved
[*] Tables in a diagram row option removed from Reverse Engineering
dialog as deprecated

You're welcome to download the Database Designer for MySQL 2.1.3 right now
at:
http://microolap.com/products/database/mysql-designer/download/

Login to your private area on our site at http://microolap.com/my/keys/ to
obtain your key if you have a license.

Please don't hesitate to ask any questions or report bugs with our
Support Ticketing system available at http://www.microolap.com/support/

-- 
Aleksandr Andreev
Developer
MicroOLAP Technologies LTD
aleksandr.andr...@gf.microolap.com
http://microolap.com


ANN: Upscene releases Database Workbench 5.0.4

2014-10-16 Thread Martijn Tonies (Upscene Productions)
Upscene releases Database Workbench 5.0.4

Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5 "

Version 5.0.4 introduces some small new features and a list of fixes
based on user feedback.

The change log is available here:
http://www.upscene.com/go/?go=tracker&v=5.0.4&id=12


Version 5 added numerous improvements to existing tools and the new, more
consistent user interface is better than ever and works fine under Wine on 
Linux.

Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!

For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

Importing a database as a data file directory

2014-10-05 Thread Tim Johnson
I have a dual-boot OS X/Ubuntu 12.04 arrangement on a mac mini. The
ubuntu system has failed and I am unable to boot it. 

I have one database on the ubuntu partition that was not backed up. 

I am able to mount the ubuntu partition with fuse-ext2 from Mac OS X,
thus I can read and copy the mysql data files at /var/lib/mysql on the
ubuntu partition.

I presume that I should be able to retrieve the database by just
copying it to /opt/local/var/db/mysql5 - the location of the mysql
datafiles on the mac partition - and setting ownership and
permissions.

So, this is a "Help me before I hurt myself" sort of question: are
there any caveats and gotchas to consider?

thanks
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Importing a database as a data file directory

2014-10-05 Thread Jan Steinman
 So, this is a Help me before I hurt myself sort of question: Are
 there any caveats and gotchas to consider?

Do you know if the database was shut down properly? Or did Ubuntu crash and die 
and your partition become unbootable while the database was in active use?

Either way, you need to make sure MySQL is shut down when you move the files, 
and then repair them after starting.

I've had good experiences moving MyISAM files that way, but bad experience 
moving INNODB files. I suspect the latter are more aggressively cached.

 Mass media must constantly manipulate and deceive us in order to sell 
products... The most fundamental deception perpetrated on the public is that 
consumption of material goods is the source of human happiness. A secondary 
deception is hiding the fact that such consumption leads to major collateral 
damage -- the possible end of human life on the planet. -- Pat Murphy
 Jan Steinman, EcoReality Co-op 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Importing a database as a data file directory

2014-10-05 Thread Reindl Harald


Am 05.10.2014 um 21:29 schrieb Tim Johnson:

I have a dual-boot OS X/Ubuntu 12.04 arrangement on a mac mini. The
ubuntu system has failed and I am unable to boot it.

I have one database on the ubuntu partition that was not backed up.

I am able to mount the ubuntu partion with fuse-ext2 from Mac OS X,
thus I can read and copy the mysql data files at /var/lib/mysql on the
ubuntu partition.

I presume that I should be able to retrieve the database by just
copying it to /opt/local/var/db/mysql5 - the location of the mysql
datafiles on the mac partition - and setting ownership and
permissions.

So, this is a Help me before I hurt myself sort of question: Are
there any caveats and gotchas to consider?


in case of MyISAM a no-brainer

* stop the db server
* copy the folder there
* set permissions
* start the server
* run mysql_upgrade --force -u root -p

well, in case of replication you might want to rebuild the slave(s) from 
scratch, but that's it - I do this regularly to rsync whole databases 
from stopped replication slaves, as well as the document root, onto testing 
machines
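
Spelled out as shell commands - a sketch only, with a made-up mount point 
and database name, /opt/local/var/db/mysql5 taken from the earlier mail, and 
the caveat that the MySQL user may be _mysql on a Mac install:

  # 1. stop the target MySQL server, however it is managed on that box
  # 2. copy the database directory off the mounted ubuntu partition
  sudo cp -Rp /Volumes/ubuntu/var/lib/mysql/mydb /opt/local/var/db/mysql5/
  # 3. hand the files to the user the server runs as
  sudo chown -R mysql:mysql /opt/local/var/db/mysql5/mydb
  # 4. start the server again, then
  mysql_upgrade --force -u root -p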




signature.asc
Description: OpenPGP digital signature


Re: Importing a database as a data file directory

2014-10-05 Thread Tim Johnson
* Jan Steinman j...@ecoreality.org [141005 13:12]:
  So, this is a Help me before I hurt myself sort of question: Are
  there any caveats and gotchas to consider?
 
 Do you know if the database was shut down properly? Or did Ubunto
 crash and die and your partition become unbootable while the
 database was in active use?
  The database had been shut down, no symptom occurred when the OS
  was booted, I just couldn't reboot (for starters) 

 Either way, you need to make sure MySQL is shut down when you move the files, 
 and then repair them after starting.
  Good tip. 

 I've had good experiences moving MyISAM files that way, but bad
 experience moving INNODB files. I suspect the latter are more
 aggressively cached.
  They are MyISAM ... 
  Thank you
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Importing a database as a data file directory

2014-10-05 Thread Tim Johnson
* Reindl Harald h.rei...@thelounge.net [141005 13:12]:
 
 Am 05.10.2014 um 21:29 schrieb Tim Johnson:
 I have a dual-boot OS X/Ubuntu 12.04 arrangement on a mac mini. The
 ubuntu system has failed and I am unable to boot it.
 
 I have one database on the ubuntu partition that was not backed up.
 
 I am able to mount the ubuntu partion with fuse-ext2 from Mac OS X,
 thus I can read and copy the mysql data files at /var/lib/mysql on the
 ubuntu partition.
 
 I presume that I should be able to retrieve the database by just
 copying it to /opt/local/var/db/mysql5 - the location of the mysql
 datafiles on the mac partition - and setting ownership and
 permissions.
 
 So, this is a Help me before I hurt myself sort of question: Are
 there any caveats and gotchas to consider?
 
 in case of MyISAM a no-brainer
  Yup. MyISAM ... 
 * stop the db server
 * copy the folder there
 * set permissions
 * start the server
 * run mysql_upgrade --force -u root -p
 
  Great! thanks for the detail

 well, in case of replication you might want to rebuild the slave(s) from 
 scratch but that was it - doing this regulary to rsync whole databases 
 from stopped replication slaves as well as the document root on testing 
 machines
 Got it. 

Thank you


-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Importing a database as a data file directory

2014-10-05 Thread Reindl Harald


Am 05.10.2014 um 22:39 schrieb Jan Steinman:

I've had good experiences moving MyISAM files that way, but bad experience
moving  INNODB files. I suspect the latter are more aggressively cached


simply no, no and no again

independent of innodb_file_per_table = 1 there is *always* a global
tablespace (ibdata1) and you just can't move innodb databases around
at the file-system level - there are no ifs and buts, and it has nothing to
do with caching

if caching mattered in that context it just would not be part of
the game in the case of a not-running service

http://www.xaprb.com/blog/2012/09/07/measuring-free-space-in-innodbs-global-tablespace/



signature.asc
Description: OpenPGP digital signature


ANN: Upscene releases Database Workbench 5.0.2

2014-09-18 Thread Martijn Tonies (Upscene Productions)
Upscene Productions is proud to announce a new release 
of the popular multi-DBMS development tool:

" Database Workbench 5 "

Version 5.0.2 fixed some issues that came to light after the initial
major version 5 release.

http://www.upscene.com/news/item/20140918

Version 5 added numerous improvements to existing tools and the new, more
consistent user interface is better than ever and works fine under Wine on 
Linux.

Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



"A few bugs popped up after the initial version 5 release, all minor issues but 
also a lack of documentation showing in the trial version. I'm proud of the team 
that thoroughly tested the version 5 release," says Martijn Tonies, founder of 
Upscene Productions. "The major version 5 release adds code editor features, 
has diagramming improvements, comes in multiple editions, is fully HiDPI aware 
and offers tunnelling for MySQL and MariaDB connections. It all was a lot of 
work, but it was worth it!"

"We worked closely with our customers and implemented many of their requests, and 
for new users we offer multiple editions to suit their development needs. From 
design to productivity, there are new features and improvements in almost 
everything!"



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.

It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

Re: database developer tool Database Workbench 5 now available

2014-09-05 Thread Martijn Tonies (Upscene Productions)

Hello Jan, list,


From: Martijn Tonies (Upscene Productions) m.ton...@upscene.com

Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!


Unless you don't do Winblows.

Please put Windows dependency clearly in your announcements and on your 
website. I couldn't find it anywhere, until I attempted a download, and got 
a useless .EXE file.


The announcement said:

"... and works fine under Wine on Linux."

Several of our customers are long-time Linux users and are very happy with
Database Workbench.

See also:
http://www.upscene.com/company/support/database_workbench_5_on_wine_ubuntu14

Hope this helps.


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird!





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: database developer tool Database Workbench 5 now available

2014-09-05 Thread Martin Gainty


 From: m.ton...@upscene.com
 To: j...@ecoreality.org; mysql@lists.mysql.com
 Subject: Re: database developer tool Database Workbench 5 now available
 Date: Fri, 5 Sep 2014 07:57:37 +0200
 
 Hello Jan, list,
 
  From: Martijn Tonies (Upscene Productions) m.ton...@upscene.com
 
  Database Workbench now comes in multiple editions with different
  pricing models, there's always a version that suits you!
 
 Unless you don't do Winblows.
 
 Please put Windows dependency clearly in your announcements and on your 
 website. I couldn't find it anywhere, until I attempted a download, and got 
 a useless .EXE file.
 
MGMy development is identical ..Development on Windows..Production on Linux
MGWhen google routes me to Vladimir Putins site for Official Windows Mysql 
Stack you download god knows what
MGsolution is to give the customer an in-between solution such as mysqld, 
mysqladmin and mysql shell scripts that will work under Windows cygwin
MGThe Database Workbench version5 MySQL plugin README should detail *a 
seamless installation* for Windows cygwin
MGif I have to make changes to my.ini or etc/init.d the readme should be 
specific on what those changes should be
MGReadme should also be specific on how to successfully test mysql ports 
(presumably 3306)..i assume netstat -a | grep 3306?
MGThanks
 
 The announcement said:
 
  ... and works fine under Wine on Linux. 
 
 Several of our customers are long-time Linux users and are very happy with
 Database Workbench.
 
 See also:
 http://www.upscene.com/company/support/database_workbench_5_on_wine_ubuntu14
 
 Hope this helps.
 
 
 With regards,
 
 Martijn Tonies
 Upscene Productions
 http://www.upscene.com
 
 Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
 Anywhere, MySQL, InterBase, NexusDB and Firebird!
 
 
 
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql
 
  

Re: database developer tool Database Workbench 5 now available

2014-09-05 Thread Martijn Tonies (Upscene Productions)




MGMy development is identical ..Development on Windows..Production on Linux
MGWhen google routes me to Vladimir Putins site for Official Windows Mysql 
Stack you download god knows what
MGsolution is to give the customer an in-between solution such as mysqld, 
mysqladmin and mysql shell scripts that will work under Windows cygwin
MGThe Database Workbench version5 MySQL plugin README should detail *a 
seamless installation* for Windows cygwin
MGif I have to make changes to my.ini or etc/init.d the readme should be 
specific on what those changes should be
MGReadme should also be specific on how to successfully test mysql ports 
(presumably 3306)..i assume netstat -a | grep 3306?

MGThanks

Martin,

I have no idea what the above means --

Database Workbench is a client-side, Windows-based database design and 
development tool,

no need for Cygwin or to modify my.ini


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com

Download Database Workbench for Oracle, MS SQL Server, Sybase SQL
Anywhere, MySQL, InterBase, NexusDB and Firebird! 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



ANN: database developer tool Database Workbench 5 now available

2014-09-04 Thread Martijn Tonies (Upscene Productions)
Upscene releases Database Workbench 5

Upscene Productions is proud to announce the availability of
the next major version of the popular multi-DBMS development tool:

" Database Workbench 5 "

There have been numerous improvements to existing tools and the new, more
consistent user interface is better than ever and works fine under Wine on 
Linux.

Database Workbench now comes in multiple editions with different
pricing models, there's always a version that suits you!



"There is so much new in Database Workbench 5, I don't know where to start," 
says Martijn Tonies, founder of Upscene Productions. "This release adds code 
editor features, has diagramming improvements, comes in multiple editions, is 
fully HiDPI aware and offers tunneling for MySQL and MariaDB connections. 
It all was a lot of work, but it was worth it!"

"We worked closely with our customers and implemented many of their requests, and 
for new users we offer multiple editions to suit their development needs. From 
design to productivity, there are new features and improvements in almost 
everything!"



For more information, see What's new in Database Workbench 5?
( http://www.upscene.com/database_workbench/whatsnew )


Database Workbench supports MySQL, MariaDB, Firebird, Oracle, MS SQL Server,
SQL Anywhere, NexusDB and InterBase, comes in multiple editions and is licensed 
based on
selectable modules.


It includes tools for database design, database maintenance, testing, data 
transfer,
data import & export, database migration, database compare and numerous other 
tools.


About Database Workbench
Database Workbench is a database developer tool, over 10 years in the making and
is being used by thousands of developers across the globe who have come to rely 
on it
every day. From database design, implementation, to testing and debugging, it 
will aid you 
in your daily database work.

About Upscene Productions
Based in The Netherlands, Europe, this small but dedicated company has been 
providing
database developers with useful tools for over 12 years. Slowly expanding the 
product portfolio
and gaining recognition amongst InterBase and Firebird database developers, 
they now offer
tools for a whole range of database systems, including Oracle and Microsoft SQL 
Server.

Re: ANN: database developer tool Database Workbench 5 now available

2014-09-04 Thread Jan Steinman
 From: Martijn Tonies (Upscene Productions) m.ton...@upscene.com
 
 Database Workbench now comes in multiple editions with different
 pricing models, there's always a version that suits you!

Unless you don't do Winblows.

Please put Windows dependency clearly in your announcements and on your 
website. I couldn't find it anywhere, until I attempted a download, and got a 
useless .EXE file.

 The record is clear that left to their own devices, the automobile 
manufacturers lack the wisdom or the will or both to switch decisively to the 
production of inexpensive, compact, energy-saving cars appropriate to our 
present needs. -- Donald E. Weeden
 Jan Steinman, EcoReality Co-op 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



ANN: Database Workbench 4.4.7, the multi-DBMS IDE now available!

2014-06-17 Thread Martijn Tonies (Upscene Productions)
ANN: Database Workbench 4.4.7, the multi-DBMS IDE now available!

Ladies, gentlemen,

Upscene Productions is proud to announce the next version 
of the popular Windows-based multi-DBMS development tool:

" Database Workbench 4.4.7 Pro "


For more information, see http://www.upscene.com/go/?go=news&id=20140617



Database Workbench supports:
- Borland InterBase ( 6.x and up )
- Firebird ( 1.x and up )
- MS SQL Server/MSDE ( 7 and up )
- MySQL 4.x and up
- Oracle Database ( 8i and up )
- Sybase SQL Anywhere ( 9 and up )
- NexusDB ( 3.0 and up )


Thank you for your support,

Martijn Tonies
Database Workbench - the database developer tool for professionals
Upscene Productions
http://www.upscene.com

ANN: Database Workbench 4.4.6, the multi-DBMS IDE now available!

2014-04-29 Thread Martijn Tonies (Upscene Productions)
ANN: Database Workbench 4.4.6, the multi-DBMS IDE now available!

Ladies, gentlemen,

Upscene Productions is proud to announce the next version 
of the popular Windows-based multi-DBMS development tool:

" Database Workbench 4.4.6 Pro "

For more information, see http://www.upscene.com/go/?go=news&id=20140429


The FREE Lite Editions will be released at a later stage.



Database Workbench supports:
- Borland InterBase ( 6.x and up )
- Firebird ( 1.x and up )
- MS SQL Server/MSDE ( 7 and up )
- MySQL 4.x and up
- Oracle Database ( 8i and up )
- Sybase SQL Anywhere ( 9 and up )
- NexusDB ( 3.0 and up )






Thank you for your support,

Martijn Tonies
Database Workbench - the database developer tool for professionals
Upscene Productions
http://www.upscene.com

Re: Excluding MySQL database tables from mysqldump

2014-04-08 Thread hsv
 2014/04/07 08:02 -0800, Tim Johnson 
  2)mysqldump forces all database names to lower case in the CREATE
  DATABASE statement. I know, one shouldn't use upper case in
  database names, but :) tell that to my clients. 

Why not? That is not mentioned in the section devoted to mapping such names to 
the file-system.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Excluding MySQL database tables from mysqldump

2014-04-08 Thread Tim Johnson
* h...@tbbs.net h...@tbbs.net [140407 23:09]:
  2014/04/07 08:02 -0800, Tim Johnson 
   2)mysqldump forces all database names to lower case in the CREATE
   DATABASE statement. I know, one shouldn't use upper case in
   database names, but :) tell that to my clients. 
 
 Why not? That is not mentioned in the section devoted to mapping such names 
 to the file-system.
  I found 'official' documentation here regarding Mac OS X:
  https://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html

  I had also found some reference to this having been a side effect
  of migrating files from older mac filesystems (with which I am not
  familiar).

  I don't find any reference in the mysqldump documentation at
  https://dev.mysql.com/doc/refman/5.1/en/mysqldump.html to any
  mechanism for overriding this.

  The incompatibility kicks in when trying to restore the databases
  on linux - and I presume FreeBSD, sun OS and other posix systems
  would show the same problem.

  Live and learn ...
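
  The knob behind this behaviour is the lower_case_table_names server
  variable discussed on that manual page; comparing it on the dumping and
  the restoring machine shows whether the two sides agree (value meanings
  paraphrased from the manual, so double-check there):

    mysql -e "SHOW VARIABLES LIKE 'lower_case_table_names'"
    # 0 - names stored and compared as given (typical Linux default)
    # 1 - names stored in lowercase, comparisons case-insensitive
    # 2 - names stored as given but compared in lowercase (case-insensitive
    #     filesystems such as the OS X default)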
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Calling function, that operates on another database

2014-04-08 Thread bars0.bars0.bars0

Hi all.

I have a standard select statement, and on one column I want to run a 
function that will connect to another database (same server).

Is this possible?

High level example:

SELECT db1.clients.client_id, getTurnover(db1.clients.client_id) FROM 
db1.clients;


AND getTurnover($id) body would be something like:

SELECT SUM(db2.turnover.amount) FROM db2.turnover WHERE 
db2.turnover.client_id = $id;



So for some data, I need to make lookup to another database table.
Is this even possible?


Cheers, Chris.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Calling function, that operates on another database

2014-04-08 Thread David Lerer
Chris, take a look at Federated tables 
https://dev.mysql.com/doc/refman/5.5/en/federated-storage-engine.html
No, it is not as easy as Oracle's dblinks.
David.
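
For the case where the second database lives on a different server, a minimal
sketch of a FEDERATED table (the connection string, account, host name and
column types here are assumptions, not Chris's actual setup; the FEDERATED
engine also has to be enabled, as it usually is not by default):

  CREATE TABLE db1.turnover_remote (
    client_id  INT NOT NULL,
    amount     DECIMAL(12,2) NOT NULL,
    KEY (client_id)
  ) ENGINE=FEDERATED
    CONNECTION='mysql://app_user:app_pass@remote-host:3306/db2/turnover';

Queries against db1.turnover_remote are then forwarded to db2.turnover on
remote-host. For the original question (both schemas on the same server) this
is not needed; the plain db2.turnover qualification is enough.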


David Lerer | Director, Database Administration | Interactive | 605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522 | Fax: (646) 487-1569 | dle...@univision.net | 
www.univision.net

-Original Message-
From: bars0.bars0.bars0 [mailto:bars0.bars0.ba...@gmail.com]
Sent: Tuesday, April 08, 2014 4:16 PM
To: mysql@lists.mysql.com
Subject: Calling function, that operates on another database

Hi all.

I have standard select statement and on one column I want to run
function, that will connect to another database (same server).
Is this possible?

High level example:

SELECT db1.clients.client_id, getTurnover(db1.clients.client_id) FROM
db1.clients;

AND getTurnover($id) body would be something like:

SELECT SUM(db2.turnover.amount) FROM db2.turnover WHERE
db2.turnover.client_id = $id;


So for some data, I need to make lookup to another database table.
Is this even possible?


Cheers, Chris.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Calling function, that operates on another database

2014-04-08 Thread Larry Martell
On Tue, Apr 8, 2014 at 4:15 PM, bars0.bars0.bars0
bars0.bars0.ba...@gmail.com wrote:
 Hi all.

 I have standard select statement and on one column I want to run function,
 that will connect to another database (same server).
 Is this possible?

 High level example:

 SELECT db1.clients.client_id, getTurnover(db1.clients.client_id) FROM
 db1.clients;

 AND getTurnover($id) body would be something like:

 SELECT SUM(db2.turnover.amount) FROM db2.turnover WHERE
 db2.turnover.client_id = $id;


 So for some data, I need to make lookup to another database table.
 Is this even possible?

Yes, using just the syntax you have: db.table
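
As a sketch of that, a stored function in db1 can read db2 directly on the
same server (column and return types here are assumptions, following the
example above):

  DELIMITER //
  CREATE FUNCTION db1.getTurnover(p_client_id INT)
    RETURNS DECIMAL(12,2)
    READS SQL DATA
  BEGIN
    RETURN (SELECT SUM(db2.turnover.amount)
              FROM db2.turnover
             WHERE db2.turnover.client_id = p_client_id);
  END//
  DELIMITER ;

  SELECT db1.clients.client_id, db1.getTurnover(db1.clients.client_id)
    FROM db1.clients;

By default the function runs with the privileges of its definer (SQL SECURITY
DEFINER), so that account needs SELECT on db2.turnover.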

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Excluding MySQL database tables from mysqldump

2014-04-07 Thread shawn l.green

Hello Tim,

On 4/4/2014 10:27 PM, Tim Johnson wrote:

* Tim Johnson t...@akwebsoft.com [140404 17:46]:

Currently I'm running mysql on a Mac OSX partition.

I have installed an ubuntu dual-booted partition and put mysql on
it. I have already set up a mysql user on the ubuntu OS.

In the past I have used mysqldump with just the --all-databases
option to transfer data across different linux partitions.

I'm wondering if I should explicitly exclude some of the tables from
the mysql database. If so, which? perhaps mysql.user?

thoughts? Opinions?
thanks

   I should add the following:

   1)the only user added to the new partition is the same as the
   primary non-root user on the Mac partition. Same credentials

   2)this is a workstation - it is closed to the outside world.

   FYI: ...



There are several ways to select which data you want in the backup. You 
can backup per-table, per-database, object type per database (routines, 
triggers), or global objects (events). What level of detail you want to 
copy from your old instance into your new instance is completely up to you.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: database not listed

2014-04-07 Thread shawn l.green

Hello Nicu,

On 4/6/2014 1:51 AM, Nicolae Marasoiu wrote:

Hi,

A directory in datadir does not show up as database. Please help!

drwx-- 2 mysql root 24576 oct  9 00:34 *opendental*
drwx-- 2 mysql mysql 4096 mar 22 19:54 performance_schema
drwx-- 2 mysql root  4096 mar 22 19:53 test


mysql show databases
 - \g
++
| Database   |
++
| information_schema |
| test   |
++
  \
Thanks
Nicu



At first glance, it appears that you have not copied the old 'mysql' 
database folder from your old --datadir location to the new one. MySQL 
only knows about the one active folder you have defined in your 
configuration file. It does not remember any older settings for where 
you wanted your databases to be stored.


Remember to make the new files in the new data directory owned by the 
current mysql user and group.


Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Excluding MySQL database tables from mysqldump

2014-04-07 Thread Tim Johnson
* shawn l.green shawn.l.gr...@oracle.com [140407 07:05]:
 Hello Tim,
 
 On 4/4/2014 10:27 PM, Tim Johnson wrote:
 * Tim Johnson t...@akwebsoft.com [140404 17:46]:
 Currently I'm running mysql on a Mac OSX partition.
 
 I have installed an ubuntu dual-booted partition and put mysql on
 it. I have already set up a mysql user on the ubuntu OS.
 
 In the past I have used mysqldump with just the --all-databases
 option to transfer data across different linux partitions.
 
 I'm wondering if I should explicitly exclude some of the tables from
 the mysql database. If so, which? perhaps mysql.user?
 
 thoughts? Opinions?
 thanks
I should add the following:
 
1)the only user added to the new partition is the same as the
primary non-root user on the Mac partition. Same credentials
 
2)this is a workstation - it is closed to the outside world.
 
FYI: ...
 
 
 There are several ways to select which data you want in the backup. You 
 can backup per-table, per-database, object type per database (routines, 
 triggers), or global objects (events). What level of detail you want to 
 copy from your old instance into your new instance is completely up to you.

  I've run into other problems, such as:

  1) running mysqldump exactly as I would have in Linux and not
  getting all databases. Dunno why, but keep on reading.

  2)mysqldump forces all database names to lower case in the CREATE
  DATABASE statement. I know, one shouldn't use upper case in
  database names, but :) tell that to my clients.

  It turns out '2)' is a known problem in Mac, but I just didn't
  know it...

  My workaround was to write a Python app that uses the MySQLdb
  module to get the names of all databases, iterate through the list
  and selectively operate on them, and ensure that proper case is
  used in the CREATE DATABASE command.

  So I'm good here, I think.
  Thanks much for the reply.

-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



database not listed

2014-04-05 Thread Nicolae Marasoiu
Hi,

A directory in datadir does not show up as database. Please help!

drwx-- 2 mysql root 24576 oct  9 00:34 *opendental*
drwx-- 2 mysql mysql 4096 mar 22 19:54 performance_schema
drwx-- 2 mysql root  4096 mar 22 19:53 test


mysql> show databases
    -> \g
+--------------------+
| Database           |
+--------------------+
| information_schema |
| test               |
+--------------------+
Thanks
Nicu


Excluding MySQL database tables from mysqldump

2014-04-04 Thread Tim Johnson
Currently I'm running mysql on a Mac OSX partition. 

I have installed an ubuntu dual-booted partition and put mysql on
it. I have already set up a mysql user on the ubuntu OS. 

In the past I have used mysqldump with just the --all-databases
option to transfer data across different linux partitions.

I'm wondering if I should explicitly exclude some of the tables from
the mysql database. If so, which? perhaps mysql.user?

thoughts? Opinions?
thanks
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Excluding MySQL database tables from mysqldump

2014-04-04 Thread Tim Johnson
* Tim Johnson t...@akwebsoft.com [140404 17:46]:
 Currently I'm running mysql on a Mac OSX partition. 
 
 I have installed an ubuntu dual-booted partition and put mysql on
 it. I have already set up a mysql user on the ubuntu OS. 
 
 In the past I have used mysqldump with just the --all-databases
 option to transfer data across different linux partitions.
 
 I'm wondering if I should explicitly exclude some of the tables from
 the mysql database. If so, which? perhaps mysql.user?
 
 thoughts? Opinions?
 thanks
  I should add the following:

  1)the only user added to the new partition is the same as the
  primary non-root user on the Mac partition. Same credentials

  2)this is a workstation - it is closed to the outside world.

  FYI: ...
-- 
Tim 
tim at tee jay forty nine dot com or akwebsoft dot com
http://www.akwebsoft.com, http://www.tj49.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Database migration from default configuration to innodb_file_per_table

2014-03-26 Thread Christophe
Hi,

Data migration was done last night.

Using a new MySQL instance made the operation quite easy; the outage
was about 7 minutes.
ibdata1 is now using 58 MiB of disk space! Each table has its own
.ibd file, and the full data directory went from 13 GiB to 3.2 GiB ...

Server load divided by 3 ~ 4 ;).

I think the operation is a success ;) .

Thanks Shawn !

Regards,
Christophe.



Le 23/03/2014 18:49, Christophe a écrit :
 Hi Shawn, and thanks for this concise answer ;) .
 
 Le 22/03/2014 05:35, shawn l.green a écrit :

 The system is operating exactly as designed. The ibdata* file(s) contain
 more than just your data and indexes. This is the common tablespace and
 it contains all the metadata necessary to identify where *all* your
 InnoDB tables actually are (where they are in a tablespace and which
 tablespace they are in) and several other things about them. In the
 terms of the InnoDB developers, this is the data dictionary.  This
 means that once you blow it away, MySQL has no details about any where
 any of your InnoDB tables are, exactly as the message says.

 The table names are visible in a SHOW TABLES command because that is
 essentially performing a directory listing of any .FRM files in that
 database's folder. 
 
 It's good to know, and explains why I got this behaviour while my lab
 run ...
 
 Without both parts (the definition in the .FRM file
 and the metadata in the common tablespace) your tables are broken. If
 you have the .frm file, you can find out which columns you have defined,
 what data types they are, if the table is partitioned or not and what
 your indexes and other constraints look like. The .frm file cannot tell
 the InnoDB engine which tablespace a table is in or what offset the root
 page of the table is within the tablespace. That information was stored
 in the ibdata file that you erased during your test run.

 The proper way to change the size of your common tablespace is
 documented here in the user manual
 http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html

 Search for the section header Decreasing the Size of the InnoDB
 Tablespace

 
 It's just what I wanted to avoid :( ... but thanks for the link ;).
 
 As a Workaround, I think I will run a second MySQL instance during task,
 and make a binary copy of files after making sure they are good.
 
 Best thanks and Regards,
 Christophe.
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Database migration from default configuration to innodb_file_per_table

2014-03-23 Thread Christophe
Hi Shawn, and thanks for this concise answer ;) .

Le 22/03/2014 05:35, shawn l.green a écrit :

 The system is operating exactly as designed. The ibdata* file(s) contain
 more than just your data and indexes. This is the common tablespace and
 it contains all the metadata necessary to identify where *all* your
 InnoDB tables actually are (where they are in a tablespace and which
 tablespace they are in) and several other things about them. In the
 terms of the InnoDB developers, this is the data dictionary.  This
 means that once you blow it away, MySQL has no details about any where
 any of your InnoDB tables are, exactly as the message says.
 
 The table names are visible in a SHOW TABLES command because that is
 essentially performing a directory listing of any .FRM files in that
 database's folder. 

It's good to know, and it explains the behaviour I got during my lab
run ...

 Without both parts (the definition in the .FRM file
 and the metadata in the common tablespace) your tables are broken. If
 you have the .frm file, you can find out which columns you have defined,
 what data types they are, if the table is partitioned or not and what
 your indexes and other constraints look like. The .frm file cannot tell
 the InnoDB engine which tablespace a table is in or what offset the root
 page of the table is within the tablespace. That information was stored
 in the ibdata file that you erased during your test run.
 
 The proper way to change the size of your common tablespace is
 documented here in the user manual
 http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html
 
 Search for the section header Decreasing the Size of the InnoDB
 Tablespace
 

It's just what I wanted to avoid :( ... but thanks for the link ;).

As a workaround, I think I will run a second MySQL instance during the task,
and make a binary copy of the files after making sure they are good.

Best thanks and Regards,
Christophe.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Locking a Database (not tables) x

2014-03-23 Thread David Lerer
Thanks Shawn, This may work for us with some script changes. We'll take a look.

By the way, too bad we cannot rename a database, or can we?
See http://dev.mysql.com/doc/refman/5.1/en/rename-database.html about the removal 
of the dangerous RENAME DATABASE statement...

David.


David Lerer | Director, Database Administration | Interactive | 605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522 | Fax: (646) 487-1569 | dle...@univision.net | 
www.univision.net

-Original Message-
From: shawn l.green [mailto:shawn.l.gr...@oracle.com]
Sent: Friday, March 21, 2014 3:34 PM
To: mysql@lists.mysql.com
Subject: Re: Locking a Database (not tables) x

Hi David.

On 3/21/2014 1:42 PM, David Lerer wrote:
 Frequently, we import a production dump that contains only 1 or 2 databases 
 into one of our QA instances that contains many more databases. (i.e. 
 database being a schema or a catalogue).
 At the beginning of the import script, we first drop all objects in the QA 
 database so that it will be a perfect match (object wise) to production.

 Is there an easy way to lock the whole database for the duration of the 
 import - so that no developers can update the database?
 Obviously, I can revoke permissions, but I was wondering whether there is a 
 better approach.


If you start with a DROP DATABASE xxx, that will pretty much ensure
that nobody gets back into it.

Then re-create your tables in a new DB (yyy)

As a last set of steps do

   CREATE DATABASE xxx
   RENAME TABLE yyy.table1 to xxx.table1, yyy.table2 to xxx.table2, ...
  (repeat for all your tables).
   DROP DATABASE yyy


Because this is essentially a metadata flip, the RENAME will be quite
speedy.

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Locking a Database (not tables) x

2014-03-23 Thread Peter Brawley

On 2014-03-23 8:02 PM, David Lerer wrote:

Thanks Shawn, This may work for us with some script changes. We'll take a look.

By the way, too bad we cannot rename a database, or can we?
See http://dev.mysql.com/doc/refman/5.1/en/rename-database.html about the removal of the 
dangerous RENAME DATABASE statement...

David.


David Lerer | Director, Database Administration | Interactive | 605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522 | Fax: (646) 487-1569 | dle...@univision.net | 
www.univision.net

-Original Message-
From: shawn l.green [mailto:shawn.l.gr...@oracle.com]
Sent: Friday, March 21, 2014 3:34 PM
To: mysql@lists.mysql.com
Subject: Re: Locking a Database (not tables) x

Hi David.

On 3/21/2014 1:42 PM, David Lerer wrote:

Frequently, we import a production dump that contains only 1 or 2 databases into one of our QA instances 
that contains many more databases. (i.e. database being a schema or a 
catalogue).
At the beginning of the import script, we first drop all objects in the QA 
database so that it will be a perfect match (object wise) to production.

Is there an easy way to lock the whole database for the duration of the import 
- so that no developers can update the database?
Obviously, I can revoke permissions, but I was wondering whether there is a 
better approach.


If you start with a DROP DATABASE xxx, that will pretty much ensure
that nobody gets back into it.

Then re-create your tables in a new DB (yyy)

As a last set of steps do

CREATE DATABASE xxx
RENAME TABLE yyy.table1 to xxx.table1, yyy.table2 to xxx.table2, ...
  (repeat for all your tables).
DROP DATABASE yyy


Remember to similarly rename other database objects too, e.g. sprocs & funcs.

PB

-




Because this is essentially a metadata flip, the RENAME will be quite
speedy.

--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Locking a Database (not tables) x

2014-03-22 Thread Manuel Arostegui
2014-03-21 18:42 GMT+01:00 David Lerer dle...@univision.net:

 Frequently, we import a production dump that contains only 1 or 2
 databases into one of our QA instances that contains many more databases.
 (i.e. database being a schema or a catalogue).
 At the beginning of the import script, we first drop all objects in the QA
 database so that it will be a perfect match (object wise) to production.

 Is there an easy way to lock the whole database for the duration of the
 import - so that no developers can update the database?
 Obviously, I can revoke permissions, but I was wondering whether there is
 a better approach.



Hello,

One more idea:

Assuming you can stop your DB: restart the database so it only listens on
the unix socket or on a different IP (an alias of your current IP could
work), connect through it, do all your stuff, and then enable it back on its
original port and IP.
Obviously I am assuming your developers connect remotely (through port 3306 or
whichever you use).

Manuel.


Re: Locking a Database (not tables) x

2014-03-22 Thread Karr Abgarian
Perhaps enabling read_only, followed by running the import as a super user, will 
do what you want.
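
A minimal sketch of that sequence (the import itself would be run from an
account with the SUPER privilege, which read_only does not block):

  SET GLOBAL read_only = ON;    -- ordinary accounts can no longer write
  -- run the import from a SUPER account, then:
  SET GLOBAL read_only = OFF;

Note this is server-wide rather than per database, so it also blocks writes
to the other databases on the QA instance while the import runs.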


On Mar 22, 2014, at 12:26 AM, Manuel Arostegui man...@tuenti.com wrote:

 2014-03-21 18:42 GMT+01:00 David Lerer dle...@univision.net:
 
 Frequently, we import a production dump that contains only 1 or 2
 databases into one of our QA instances that contains many more databases.
 (i.e. database being a schema or a catalogue).
 At the beginning of the import script, we first drop all objects in the QA
 database so that it will be a perfect match (object wise) to production.
 
 Is there an easy way to lock the whole database for the duration of the
 import - so that no developers can update the database?
 Obviously, I can revoke permissions, but I was wondering whether there is
 a better approach.
 
 
 
 Hello,
 
 One more idea:
 
 Assuming you can stop your DB - restart the database so it only listens in
 the unix socket or in a different IP (an alias of your current IP could
 work) and connect thru it do all your stuff and enabled it back to its
 original port and IP.
 Obviously I am assuming your developers connect remotely (thru port 3306 or
 whichever you use).
 
 Manuel.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Locking a Database (not tables) x

2014-03-21 Thread David Lerer
Frequently, we import a production dump that contains only 1 or 2 databases 
into one of our QA instances that contains many more databases. (i.e. 
database being a schema or a catalogue).
At the beginning of the import script, we first drop all objects in the QA 
database so that it will be a perfect match (object wise) to production.

Is there an easy way to lock the whole database for the duration of the import 
- so that no developers can update the database?
Obviously, I can revoke permissions, but I was wondering whether there is a 
better approach.

Thanks, David.

David Lerer | Director, Database Administration | Interactive | 605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522 | Fax: (646) 487-1569 | dle...@univision.net | 
www.univision.net


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Locking a Database (not tables)

2014-03-21 Thread David Lerer
Frequently, we import a production dump that contains only 1 or 2 databases 
into one of our QA instances that contains many more databases. (i.e. 
database being a schema or a catalogue).
At the beginning of the import script, we first drop all objects in the QA 
database so that it will be a perfect match (object wise) to production.

Is there an easy way to lock the whole database for the duration of the import 
- so that no developers can update the database?
Obviously, I can revoke permissions, but I was wondering whether there is a 
better approach.

Thanks, David.

David Lerer | Director, Database Administration | Interactive | 605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522 | Fax: (646) 487-1569 | dle...@univision.net | 
www.univision.net



RE: Locking a Database (not tables) x

2014-03-21 Thread David Lerer
Thanks Wayne. This is a great idea to prevent user activity on the server. I'll 
use it in the future.
But I'm looking for a way to prevent user activity on a single database (i.e. 
database being a schema or a catalogue).
David.




David Lerer  |  Director, Database Administration  |  Interactive  |  605 Third 
Avenue, 12th Floor, New York, NY 10158
Direct: (646) 487-6522  |  Fax: (646) 487-1569  |  dle...@univision.net  |  http://www.univision.net



From: Wayne Leutwyler [mailto:wleut...@columbus.rr.com]
Sent: Friday, March 21, 2014 2:12 PM
To: David Lerer
Subject: Re: Locking a Database (not tables) x

You could set max_connections = 0, then kill off any remaining connections. Do 
your data load and then set max_connections back to what it was prior.
show variables like 'max_connections'; (note this number)
set global max_connections = 0;
This will leave 1 connection open for a superuser; I don't know what ID you use 
for that, a lot of people use root.
Now import your data.
Once the import is done, set global max_connections back to what it was.
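
The same sequence as a sketch in SQL (Wayne uses 0; the variable's documented
minimum is 1, and either way the server keeps one extra connection reserved
for an account with SUPER, which is what lets the superuser stay connected):

  SHOW VARIABLES LIKE 'max_connections';  -- note the current value, e.g. 151
  SET GLOBAL max_connections = 1;         -- the lowest value the variable accepts
  -- kill remaining sessions, run the import from the SUPER account, then:
  SET GLOBAL max_connections = 151;       -- restore the value noted above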

On Mar 21, 2014, at 1:42 PM, David Lerer 
dle...@univision.net wrote:
Frequently, we import a production dump that contains only 1 or 2 databases 
into one of our QA instances that contains many more databases. (i.e. 
database being a schema or a catalogue).
At the beginning of the import script, we first drop all objects in the QA 
database so that it will be a perfect match (object wise) to production.

Is there an easy way to lock the whole database for the duration of the import 
- so that no developers can update the database?
Obviously, I can revoke permissions, but I was wondering whether there is a 
better approach.

Thanks, David.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


Walter Wayne Leutwyler, RHCT
Sr. MySQL Database Administrator
Mobile: 614 519 5672
Office: 614 889 4956
E-mail: wayne.leutwy...@gmail.com
E-mail: wleut...@columbus.rr.com
Website: http://penguin-workshop.dyndns.org

Courage is being scared to death, but saddling up anyway. --John Wayne



Re: Locking a Database (not tables) x

2014-03-21 Thread shawn l.green

Hi David.

On 3/21/2014 1:42 PM, David Lerer wrote:

Frequently, we import a production dump that contains only 1 or 2 databases into one of our QA instances 
that contains many more databases. (i.e. database being a schema or a 
catalogue).
At the beginning of the import script, we first drop all objects in the QA 
database so that it will be a perfect match (object wise) to production.

Is there an easy way to lock the whole database for the duration of the import 
- so that no developers can update the database?
Obviously, I can revoke permissions, but I was wondering whether there is a 
better approach.



If you start with a DROP DATABASE xxx, that will pretty much ensure 
that nobody gets back into it.


Then re-create your tables in a new DB (yyy)

As a last set of steps do

  CREATE DATABASE xxx
  RENAME TABLE yyy.table1 to xxx.table1, yyy.table2 to xxx.table2, ... 
  (repeat for all your tables).

  DROP DATABASE yyy


Because this is essentially a metadata flip, the RENAME will be quite 
speedy.
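
As a sketch, spelled out with hypothetical names (qa_db for the QA schema,
qa_db_stage for the staging schema the import loads into):

  DROP DATABASE qa_db;                        -- immediately locks everyone out
  CREATE DATABASE qa_db_stage;
  -- ... run the production import into qa_db_stage ...
  CREATE DATABASE qa_db;
  RENAME TABLE qa_db_stage.customers TO qa_db.customers,
               qa_db_stage.orders    TO qa_db.orders;  -- repeat for every table
  DROP DATABASE qa_db_stage;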


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Database migration from default configuration to innodb_file_per_table

2014-03-21 Thread Christophe
Hi list,

I'd like your advice (one more time ;) ) about this case:

The context is:

A huge database using the InnoDB engine, filled over several years, with
the ibdata file (which, as far as I've seen, cannot be shrunk except by
dropping all databases to recreate them ...) taking over 9GiB on the
filesystem.

We have to split the data into two databases, whilst the server is
running, after setting innodb_file_per_table in the MySQL configuration
and restarting the service.

The first database contains at most the last 6 months of data.
The second database, considered an archive, contains data older than 6 months.

Separating the actual data is not such a problem: several mysqldump runs
with the --where switch handle that case.

After this, shell scripts using INSERT INTO archive SELECT * FROM
realtime WHERE ... seem reliable enough to do it.

*But* this is a one-time scheduled data migration task (previewed and
accepted by the customer, to run by night / not tonight ... :) ), and *I'd
like to remove the ibdata1 file*, as it takes huge disk space.

The migration task also includes converting the old tables (previously in
InnoDB) by altering them to InnoDB again, so that each one is recreated in
its own file under the innodb_file_per_table parameter.

Problem:

While testing this in the lab, I found that removing the ibdata1 file
leaves MySQL unable to reference any table in the databases.

USE database works ... but DESCRIBE table results in: table
database.table does not exist.

Is there any way to handle this case?

Regards .
Christophe.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Database migration from default configuration to innodb_file_per_table

2014-03-21 Thread shawn l.green

Hello Christophe,

On 3/21/2014 4:47 PM, Christophe wrote:

Hi list,

I'd like your advice, (one more time ;) ) about this case :

The context is :

A huge database using InnoDB engine from filling about several years
(without possible shrinking, as I've seen, except dropping all databases
to recreate them ... ) the ibdata file , which is taking over 9GiB on
filesystem.

We have to separate data from databases in two cases , whilst it is
running , after setting innodb_file_per_table in MySQL configuration ,
and restarting service.

Creating First database, containing at oldest 6 months of data.
Second database, considered archive containing data older than 6 month.

Not such a problem to separate actual data : using several mysqldump
with --where switch, which handles the case.

After this, Shell scripts using INSERT INTO archive SELECT * FROM
realtime WHERE ... seem to be reliable to do this.

*But*, in this one timed scheduled task in data migration (Previewed and
accepted by customer, by night / not tonight ... :) ), and *I'd like to
remove the ibdata1 file* , as it takes huge disk space.

Migration task also includes converting old tables (previously in
InnoDB), to alter them into  InnoDB, to recreate the InnoDB file
using innodb_file_per_table parameter.

Problem :

While testing this in lab, I came to fact that removing ibdata1 file,
cancels MySQL to get reference to any table in databases .

use database works ... but DESCRIBE table goes to : table
database.table does not exist.

Is there anyway to handle this case ?



The system is operating exactly as designed. The ibdata* file(s) contain 
more than just your data and indexes. This is the common tablespace and 
it contains all the metadata necessary to identify where *all* your 
InnoDB tables actually are (where they are in a tablespace and which 
tablespace they are in) and several other things about them. In the 
terms of the InnoDB developers, this is the data dictionary. This 
means that once you blow it away, MySQL has no details about where 
any of your InnoDB tables are, exactly as the message says.


The table names are visible in a SHOW TABLES command because that is 
essentially performing a directory listing of any .FRM files in that 
database's folder. Without both parts (the definition in the .FRM file 
and the metadata in the common tablespace) your tables are broken. If 
you have the .frm file, you can find out which columns you have defined, 
what data types they are, if the table is partitioned or not and what 
your indexes and other constraints look like. The .frm file cannot tell 
the InnoDB engine which tablespace a table is in or what offset the root 
page of the table is within the tablespace. That information was stored 
in the ibdata file that you erased during your test run.


The proper way to change the size of your common tablespace is 
documented here in the user manual

http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html

Search for the section header Decreasing the Size of the InnoDB Tablespace
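
For the per-table conversion itself, a minimal sketch (mydb.mytable is a
hypothetical table; once innodb_file_per_table is on, a rebuild moves its data
into its own .ibd file, although the existing ibdata1 file does not shrink):

  SET GLOBAL innodb_file_per_table = 1;    -- dynamic in recent versions; otherwise set it in my.cnf and restart
  ALTER TABLE mydb.mytable ENGINE=InnoDB;  -- rebuild: the data moves to mydb/mytable.ibd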

Best regards,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



ANN: Database Workbench 4.4.5, special MySQL discount!

2014-03-19 Thread Martijn Tonies (Upscene Productions)
ANN: Database Workbench 4.4.5, the multi-DBMS IDE now available!

Ladies, gentlemen,

Upscene Productions is proud to announce the next version 
of the popular Windows-based multi-DBMS development tool:

 Database Workbench 4.4.5 Pro 


March only: 25% discount on Database Workbench Pro for MySQL
using discount coupon MYSQLSPECIAL - additional modules can be
purchased at a later stage.


For more information, see http://www.upscene.com/go/?go=newsid=20140319


The FREE Lite Editions will be released at a later stage.



Database Workbench supports:
- Borland InterBase ( 6.x and up )
- Firebird ( 1.x and up )
- MS SQL Server/MSDE ( 7 and up )
- MySQL 4.x and up
- Oracle Database ( 8i and up )
- Sybase SQL Anywhere ( 9 and up )
- NexusDB ( 3.0 and up )






Thank you for your support,

Martijn Tonies
Database Workbench - the database developer tool for professionals
Upscene Productions
http://www.upscene.com

Database/kernel community topic at Collaboration Summit 2014

2014-03-10 Thread Mel Gorman
Hi,

Arrangements have been made to hold a meeting between database and kernel
developers at Collaboration Summit 2014 http://sched.co/1hEBRuq on March
27th 2014. This was organised after discussions on pain points encountered
by the PostgreSQL community. Originally the plan had been to just have a
topic for LSF/MM, but there was much more interest in the topic than anticipated,
so the Collaboration Summit meeting will be much more open.

If there are developers attending Collaboration Summit that work in
the database or kernel communities, it would be great if you could come
along. Previous discussions were on the PostgreSQL list and that should be
expanded in case we accidentally build postgres-only features. The intent
is to identify the problems encountered by databases and where relevant,
test cases that can be used to demonstrate them if they exist.  While the
kernel community may be aware of some of the problems, they are not always
widely known or understood. There is a belief that some interfaces are fine
when in reality applications cannot use them properly. The ideal outcome
of the meeting would be concrete proposals on kernel features that could
be developed over the course of time to address any identified problem.

For reference, this is a summary of the discussion that took place when
the topic was proposed for LSF/MM.

Thanks.

---8---

On testing of modern kernels


Josh Berkus claims that most people are using Postgres with 2.6.19 and
consequently there may be poor awareness of recent kernel developments.
This is a disturbingly large window of opportunity for problems to have
been introduced.

Minimally, Postgres has concerns about IO-related stalls which may or may
not exist in current kernels. There were indications that large writes
starve reads. There have been variants of this style of bug in the past but
it's unclear what the exact shape of this problem is and if IO-less dirty
throttling affected it. It is possible that Postgres was burned in the past
by data being written back from reclaim context in low memory situations.
That would have looked like massive stalls with drops in IO throughput
but it was fixed in relatively recent kernels. Any data on historical
tests would be helpful. Alternatively, a pgbench-based reproduction test
could potentially be used by people in the kernel community that track
performance over time and have access to a suitable testing rig.

It was mentioned that Postgres has a tool called pg_test_fsync, brought up
in the context of testing different wal_sync_methods. Potentially
it could also be used for evaluating some kernel patches.

Gregory Smith highlighted the existence of a benchmark wrapper for pgbench
called pgbench-tools: https://github.com/gregs1104/pgbench-tools . It can
track statistics of interest to Postgres as well as report in interesting
metrics such as transaction latency. He had a lot of information on testing
requirements and some very interesting tuning information and it's worth
reading the whole mail

http://www.postgresql.org/message-id/52d99161.60...@gmail.com

Postgres bug reports and LKML
-

It is claimed that LKML does not welcome bug reports but it's less clear
what the basis of this claim is.  Is it because the reports are ignored? A
possible explanation is that they are simply getting lost in the LKML noise
and there would be better luck if the bug report was cc'd to a specific
subsystem list. A second possibility is the bug report is against an old
kernel and unless it is reproduced on a recent kernel the bug report will
be ignored. Finally it is possible that there is not enough data available
to debug the problem. The worst explanation is that to date the problem
has not been fixable but the details of this have been lost and are now
unknown. Is it possible that some of these bug reports can be refreshed
so that at least there is a chance they get addressed?

Apparently there were changes to the reclaim algorithms that crippled
performance without any sysctls. The problem may be compounded by the
introduction of adaptive replacement cache in the shape of the thrash
detection patches currently being reviewed.  Postgres investigated the
use of ARC in the past and ultimately abandoned it. Details are in the
archives (http://www.Postgres.org/search/?m=1q=arcl=1d=-1s=r). I
have not read them, just noting they exist for future reference.

Sysctls to control VM behaviour are not popular as such tuning parameters
are often used as an excuse to not properly fix the problem. Would it be
possible to describe a test case that shows 2.6.19 performing well and a
modern kernel failing? That would give the VM people a concrete basis to
work from to either fix the problem or identify exactly what sysctls are
required to make this work.

I am confident that any bug related to VM reclaim in this area has been lost.
At least, I recall no instances of it being discussed on linux

ANN: Database Workbench 4.4.4, the multi-DBMS IDE now available!

2014-01-27 Thread Martijn Tonies (Upscene Productions)
ANN: Database Workbench 4.4.4, the multi-DBMS IDE now available!

Ladies, gentlemen,

Upscene Productions is proud to announce the next version 
of the popular Windows-based multi-DBMS development tool:

 Database Workbench 4.4.4 Pro 



For more information, see http://www.upscene.com/go/?go=newsid=20140127


The FREE Lite Editions will be released at a later stage.



Database Workbench supports:
- Borland InterBase ( 6.x and up )
- Firebird ( 1.x and up )
- MS SQL Server/MSDE ( 7 and up )
- MySQL 4.x and up
- Oracle Database ( 8i and up )
- Sybase SQL Anywhere ( 9 and up )
- NexusDB ( 3.0 and up )






Thank you for your support,

Martijn Tonies
Database Workbench - the database developer tool for professionals
Upscene Productions
http://www.upscene.com

ANN: MicroOLAP Database Designer for MySQL 2.1.2 is available!

2014-01-23 Thread Aleksander Andreev
This maintenance release is intended to solve several bugs and some minor
problems with the registration system.

Full change log:
[*] Registration system improved, human readable error messages introduced
[-] "Access Violation during export to graphic in split into pages mode"
bug fixed
[-] "Cannot merge model with views or stored routines" bug fixed
[-] "Unable to find a Table of Contents when invoking help system" bug fixed

You're welcome to download the Database Designer for MySQL 2.1.2 right now
at:
http://microolap.com/products/database/mysql-designer/download/

Login to your private area on our site at http://microolap.com/my/keys/ to
obtain your key if you have a license.

Please don't hesitate to ask any questions or report bugs with our Support
Ticketing system available at
http://www.microolap.com/support/


Re: ANN: MicroOLAP Database Designer for MySQL 2.1.2 is available!

2014-01-23 Thread Jan Steinman
Can you please add to your announcements that this product is Windows-only?

It took some searching on your website to come up with that info, and I dare 
say the majority of MySQL users here are non-Windows-based.

Thank you!

(Now I have to go find and delete that download...)

 A low-energy policy allows for a wide choice of lifestyles and cultures. 
If, on the other hand, a society opts for high energy consumption, its social 
relations must be dictated by technocracy and will be equally degrading whether 
labeled capitalist or socialist. -- Ivan Illich
 Jan Steinman, EcoReality Co-op 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: ANN: MicroOLAP Database Designer for MySQL 2.1.2 is available!

2014-01-23 Thread Aleksander Andreev
Hi Jan!

We are very sorry that we misled you.

However, our Database Designer for MySQL works well under WineHQ.

Best regards,

On Thu, Jan 23, 2014 at 11:35 PM, Jan Steinman j...@bytesmiths.com wrote:

 Can you please add to your announcements that this product is Windows-only?

 It took some searching on your website to come up with that info, and I
 dare say the majority of MySQL users here are non-Windows-based.

 Thank you!

 (Now I have to go find and delete that download...)

  A low-energy policy allows for a wide choice of lifestyles and
 cultures. If, on the other hand, a society opts for high energy
 consumption, its social relations must be dictated by technocracy and will
 be equally degrading whether labeled capitalist or socialist. -- Ivan Illich
  Jan Steinman, EcoReality Co-op 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




-- 
Aleksandr Andreev
Developer
MicroOLAP Technologies LTD
aleksandr.andr...@gf.microolap.com
http://microolap.com


ANN: Database Workbench 4.4.3, the multi-DBMS IDE now available!

2013-11-27 Thread Martijn Tonies (Upscene Productions)
ANN: Database Workbench 4.4.3, the multi-DBMS IDE now available!

Ladies, gentlemen,

Upscene Productions is proud to announce the next version 
of the popular Windows-based multi-DBMS development tool:

 Database Workbench 4.4.3 Pro 

This release fixes a blocking problem after a recent Windows XP
security update.

There's a 15% autumn discount until the end of November!


For more information, see http://www.upscene.com/go/?go=newsid=20131127


The FREE Lite Editions are also updated, this includes several enhancements
compared to the previous available Lite Editions.



Database Workbench supports:
- Borland InterBase ( 6.x and up )
- Firebird ( 1.x and up )
- MS SQL Server/MSDE ( 7 and up )
- MySQL 4.x and up
- Oracle Database ( 8i and up )
- Sybase SQL Anywhere ( 9 and up )
- NexusDB ( 3.0 and up )






Thank you for your support,

Martijn Tonies
Database Workbench - the database developer tool for professionals
Upscene Productions
http://www.upscene.com

Re: How do I mysqldump different database tables to the same .sql file?

2013-11-22 Thread Johan De Meersman
--databases, methinks.

- Original Message -
 From: Daevid Vincent dae...@daevid.com
 To: mysql@lists.mysql.com
 Sent: Thursday, 21 November, 2013 10:44:39 PM
 Subject: How do I mysqldump different database tables to the same .sql file?
 
 I'm working on some code where I am trying to merge two customer accounts
 (we get people signing up under different usernames, emails, or just create
 a new account sometimes). I want to test it, and so I need a way to restore
 the data in the particular tables. Taking a dump of all the DBs and tables
 is not feasible as it's massive, and importing (with indexes) takes HOURS. I
 just want only the tables that are relevant. I can find all the tables that
 have `customer_id` in them with this magic incantation:
  
 SELECT `TABLE_NAME`,`TABLE_SCHEMA` FROM `information_schema`.`COLUMNS` WHERE
 `COLUMN_NAME` = 'customer_id' ORDER BY `TABLE_SCHEMA`, `TABLE_NAME`
  
 Then I crafted this, but it pukes on the db name portion. :-(
  
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --tables
 member_sessions.users_last_login support.tickets mydb1.clear_passwords
 mydb1.crak_subscriptions mydb1.customers mydb1.customers_free
 mydb1.customers_free_tracking mydb1.customers_log
 mydb1.customers_subscriptions mydb1.customers_transactions mydb1.players
 mydb1content.actors_comments mydb1content.actor_collections
 mydb1content.actor_likes_users mydb1content.collections
 mydb1content.dvd_likes_users mydb1content.free_videos
 mydb1content.genre_collections mydb1content.playlists
 mydb1content.poll_votes mydb1content.scenes_comments
 mydb1content.scenes_ratings_users_new2 mydb1content.scene_collections
 mydb1content.scene_likes_users mydb1content.videos_downloaded
 mydb1content.videos_viewed > merge_backup.sql
  
 -- Connecting to localhost...
 mysqldump: Got error: 1049: Unknown database
 'member_sessions.users_last_login' when selecting the database
 -- Disconnecting from localhost...
  
 I searched a bit and found that it seems I have to split this into multiple
 statements and append like I'm back in 1980. *sigh*
  
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 member_sessions --tables users_last_login >> merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 support --tables tickets >> merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 mydb1 --tables clear_passwords customers customers_free
 customers_free_tracking customers_log customers_subscriptions
 customers_transactions players >> merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 content --tables actors_comments actor_collections actor_likes_users
 collections dvd_likes_users free_videos genre_collections playlists
 poll_votes scenes_comments scenes_ratings_users_new2 scene_collections
 scene_likes_users videos_downloaded videos_viewed >> merge_backup.sql
  
 The critical flaw here is that the mysqldump program does NOT put the
 necessary USE DATABASE statement in each of these dumps since there is
 only one DB after the --database apparently. UGH. Nor do I see a command line
 option to force it to output this seemingly obvious statement.
  
 It's a pretty significant shortcoming of mysqldump if you ask me that I
 can't do it the way I had it in the first example since that's pretty much
 standard SQL convention of db.table.column format. And even more baffling is
 why it wouldn't dump out the USE statement always even if there is only
 one DB. It's a few characters and would save a lot of headaches in case
 someone tried to dump their .sql file into the wrong DB on accident.
  
 Plus it's not easy to edit a 2.6GB file to manually insert these USE
 lines.
  
 Is there a way to do this with some command line option I'm not seeing in
 the man page?
 

-- 
Unhappiness is discouraged and will be corrected with kitten pictures.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How do I mysqldump different database tables to the same .sql file?

2013-11-21 Thread Michael Dykman
There is a good reason that the USE database is not output in those dumps:
it would make the tool very difficult to use for moving data around.

If I might suggest, a simple workaround is to create a shell script along
these lines; you might want to do something a little more sophisticated.

#!/bin/sh

echo 'USE `database1`;' > outfile.sql
mysqldump -(firstsetofoptions) >> outfile.sql
echo 'USE `database2`;' >> outfile.sql
mysqldump -(secondsetofoptions) >> outfile.sql




On Thu, Nov 21, 2013 at 4:44 PM, Daevid Vincent dae...@daevid.com wrote:

 I'm working on some code where I am trying to merge two customer accounts
 (we get people signing up under different usernames, emails, or just create
 a new account sometimes). I want to test it, and so I need a way to restore
 the data in the particular tables. Taking a dump of all the DBs and tables
 is not feasible as it's massive, and importing (with indexes) takes HOURS.
 I
 just want only the tables that are relevant. I can find all the tables that
 have `customer_id` in them with this magic incantation:

 SELECT `TABLE_NAME`,`TABLE_SCHEMA` FROM `information_schema`.`COLUMNS`
 WHERE
 `COLUMN_NAME` = 'customer_id' ORDER BY `TABLE_SCHEMA`, `TABLE_NAME`

 Then I crafted this, but it pukes on the db name portion. :-(

 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --tables
 member_sessions.users_last_login support.tickets mydb1.clear_passwords
 mydb1.crak_subscriptions mydb1.customers mydb1.customers_free
 mydb1.customers_free_tracking mydb1.customers_log
 mydb1.customers_subscriptions mydb1.customers_transactions mydb1.players
 mydb1content.actors_comments mydb1content.actor_collections
 mydb1content.actor_likes_users mydb1content.collections
 mydb1content.dvd_likes_users mydb1content.free_videos
 mydb1content.genre_collections mydb1content.playlists
 mydb1content.poll_votes mydb1content.scenes_comments
 mydb1content.scenes_ratings_users_new2 mydb1content.scene_collections
 mydb1content.scene_likes_users mydb1content.videos_downloaded
 mydb1content.videos_viewed  merge_backup.sql

 -- Connecting to localhost...
 mysqldump: Got error: 1049: Unknown database
 'member_sessions.users_last_login' when selecting the database
 -- Disconnecting from localhost...

 I searched a bit and found that it seems I have to split this into multiple
 statements and append like I'm back in 1980. *sigh*

 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 member_sessions --tables users_last_login  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 support --tables tickets  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 mydb1 --tables clear_passwords customers customers_free
 customers_free_tracking customers_log customers_subscriptions
 customers_transactions players  merge_backup.sql
 mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
 --complete-insert --insert-ignore --create-options --quick --force
 --set-charset --disable-keys --quote-names --comments --verbose --database
 content --tables actors_comments actor_collections actor_likes_users
 collections dvd_likes_users free_videos genre_collections playlists
 poll_votes scenes_comments scenes_ratings_users_new2 scene_collections
 scene_likes_users videos_downloaded videos_viewed  merge_backup.sql

 The critical flaw here is that the mysqldump program does NOT put the
 necessary USE DATABASE statement in each of these dumps since there is
 only one DB after the -database apparently. UGH. Nor do I see a command
 line
 option to force it to output this seemingly obvious statement.

 It's a pretty significant shortcoming of mysqldump if you ask me that I
 can't do it the way I had it in the first example since that's pretty much
 standard SQL convetion of db.table.column format. And even more baffling is
 why it wouldn't dump out the USE statement always even if there is only
 one DB. It's a few characters and would save a lot of headaches in case
 someone tried to dump their .sql file into the wrong DB on accident.

 Plus it's not easy to edit a 2.6GB file to manually insert these USE
 lines.

 Is there a way to do this with some command line option I'm not seeing in
 the man page?




-- 
 - michael dykman
 - mdyk...@gmail.com

 May the Source be with you.


RE: How do I mysqldump different database tables to the same .sql file?

2013-11-21 Thread Daevid Vincent
Except that it outputs the USE statement if you have more than one
database, so your theory doesn't hold a lot of water IMHO. Not to mention
it's near the very top of the output so it's pretty easy to trim it off if
you REALLY needed to move the DB (which I presume is not as frequently as
simply wanting a backup/dump of a database to restore).

Thanks for the shell script suggestion, that is what I've done already to
work around this silliness.

 -Original Message-
 From: Michael Dykman [mailto:mdyk...@gmail.com]
 Sent: Thursday, November 21, 2013 1:59 PM
 To: MySql
 Subject: Re: How do I mysqldump different database tables to the same .sql
 file?
 
 There is a good reason that the USE database is not output in those
dumps..
  it would make the tool very difficult to use for moving data around.
 
 If I might suggest, a simple workaround is to create a shell script along
 these lines..  you might to do something a little more sophisticated.
 
 #
 #!/bin/sh
 
 echo  USE `database1`;  outflfile.sql
 mysqldump -(firstsetofoptions)  outfile.sql
 echo  USE `database2`;  outflfile.sql
 mysqldump -(secondsetofoptions)  outfile.sql
 
 
 
 
 On Thu, Nov 21, 2013 at 4:44 PM, Daevid Vincent dae...@daevid.com wrote:
 
  I'm working on some code where I am trying to merge two customer
accounts
  (we get people signing up under different usernames, emails, or just
 create
  a new account sometimes). I want to test it, and so I need a way to
 restore
  the data in the particular tables. Taking a dump of all the DBs and
tables
  is not feasible as it's massive, and importing (with indexes) takes
HOURS.
  I
  just want only the tables that are relevant. I can find all the tables
 that
  have `customer_id` in them with this magic incantation:
 
  SELECT `TABLE_NAME`,`TABLE_SCHEMA` FROM `information_schema`.`COLUMNS`
  WHERE
  `COLUMN_NAME` = 'customer_id' ORDER BY `TABLE_SCHEMA`, `TABLE_NAME`
 
  Then I crafted this, but it pukes on the db name portion. :-(
 
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose --tables
  member_sessions.users_last_login support.tickets mydb1.clear_passwords
  mydb1.crak_subscriptions mydb1.customers mydb1.customers_free
  mydb1.customers_free_tracking mydb1.customers_log
  mydb1.customers_subscriptions mydb1.customers_transactions mydb1.players
  mydb1content.actors_comments mydb1content.actor_collections
  mydb1content.actor_likes_users mydb1content.collections
  mydb1content.dvd_likes_users mydb1content.free_videos
  mydb1content.genre_collections mydb1content.playlists
  mydb1content.poll_votes mydb1content.scenes_comments
  mydb1content.scenes_ratings_users_new2 mydb1content.scene_collections
  mydb1content.scene_likes_users mydb1content.videos_downloaded
  mydb1content.videos_viewed  merge_backup.sql
 
  -- Connecting to localhost...
  mysqldump: Got error: 1049: Unknown database
  'member_sessions.users_last_login' when selecting the database
  -- Disconnecting from localhost...
 
  I searched a bit and found that it seems I have to split this into
 multiple
  statements and append like I'm back in 1980. *sigh*
 
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose
--database
  member_sessions --tables users_last_login  merge_backup.sql
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose
--database
  support --tables tickets  merge_backup.sql
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose
--database
  mydb1 --tables clear_passwords customers customers_free
  customers_free_tracking customers_log customers_subscriptions
  customers_transactions players  merge_backup.sql
  mysqldump -uroot -proot --skip-opt --add-drop-table --extended-insert
  --complete-insert --insert-ignore --create-options --quick --force
  --set-charset --disable-keys --quote-names --comments --verbose
--database
  content --tables actors_comments actor_collections actor_likes_users
  collections dvd_likes_users free_videos genre_collections playlists
  poll_votes scenes_comments scenes_ratings_users_new2 scene_collections
  scene_likes_users videos_downloaded videos_viewed  merge_backup.sql
 
  The critical flaw here is that the mysqldump program does NOT put the
  necessary USE DATABASE statement in each of these dumps since there is
  only one DB after the -database apparently. UGH. Nor do I see a command
  line
  option to force it to output this seemingly
