Re: can I just encrypt tables? what about the app?

2016-02-29 Thread shawn l.green



On 2/29/2016 3:13 PM, Reindl Harald wrote:



Am 29.02.2016 um 20:54 schrieb Gary Smith:

On 29/02/2016 19:50, Reindl Harald wrote:


cryptsetup/luks can achieve that way better


Only to a degree.


no - not only to a degree - when the requirement is "store nothing
unencrypted on disk" there is no degree; it's all or nothing


Once the disk is unencrypted, you've got access to the
filesystem. If you've got physical access to the machine, then anything
which gives you console access gives you (potentially) access to the
underlying database files. If you can get those, it's trivial to get
access to the dataset that they contain.

However, if TDE is employed, then you've got another significant
obstacle to overcome: The data is only encrypted (aiui) once it's in
memory. At this point, you're needing to do attacks on RAM to get access
to the data - and even then, you're unlikely to get 3 bars for a jackpot
payout of the whole database schema, assuming a decent sized database.


in theory

in reality you don't need to hack around in RAM - mysqld needs access to
the key to operate on the data, so you only need to find that one piece

the same goes for encryption on the application side before sending data to
the db-layer - see the start and subject of this thread for how far people
are from understanding how and on what layer things are encrypted and
what exactly is protected in which context

there is no "turn this on and you are safe" without deeper understanding



Correct. As long as the key and the lock are on the same machine, there 
will be some way of opening that lock. It's just a matter of how hard 
you can make it to find that key. No data is perfectly safe. No crypto 
is unbreakable. Ever.


Maybe the key only exists in memory while the daemon runs? You can hack 
the memory to find the key.


Maybe the key is retrieved from another key service daemon. If you have 
the credentials to impersonate a valid retriever, you are in the money.


The purpose of any encryption system is not to make it impossible to 
read the data. Its purpose is to make it impractically hard for any 
unauthorized parties to read it.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN

Become certified in MySQL! Visit https://www.mysql.com/certification/ 
for details.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Reindl Harald



Am 29.02.2016 um 20:54 schrieb Gary Smith:

On 29/02/2016 19:50, Reindl Harald wrote:


cryptsetup/luks can achieve that way better


Only to a degree.


no - not only to a degree - when the requirement is "store nothing
unencrypted on disk" there is no degree; it's all or nothing



Once the disk is unencrypted, you've got access to the
filesystem. If you've got physical access to the machine, then anything
which gives you console access gives you (potentially) access to the
underlying database files. If you can get those, it's trivial to get
access to the dataset that they contain.

However, if TDE is employed, then you've got another significant
obstacle to overcome: The data is only encrypted (aiui) once it's in
memory. At this point, you're needing to do attacks on RAM to get access
to the data - and even then, you're unlikely to get 3 bars for a jackpot
payout of the whole database schema, assuming a decent sized database.


in theory

in reality you don't need to hack around in RAM - mysqld needs access to
the key to operate on the data, so you only need to find that one piece


the same goes for encryption on the application side before sending data to
the db-layer - see the start and subject of this thread for how far people
are from understanding how and on what layer things are encrypted and
what exactly is protected in which context


there is no "turn this on and you are safe" without deeper understanding





Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Gary Smith

On 29/02/2016 19:54, Gary Smith wrote:
However, if TDE is employed, then you've got another significant 
obstacle to overcome: The data is only encrypted (aiui) once it's in 
memory.

Apologies, that should read "unencrypted (aiui) once it's in memory"

Gary




Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Gary Smith

On 29/02/2016 19:50, Reindl Harald wrote:


cryptsetup/luks can achieve that way better

Only to a degree. Once the disk is unencrypted, you've got access to the 
filesystem. If you've got physical access to the machine, then anything 
which gives you console access gives you (potentially) access to the 
underlying database files. If you can get those, it's trivial to get 
access to the dataset that they contain.


However, if TDE is employed, then you've got another significant 
obstacle to overcome: The data is only encrypted (aiui) once it's in 
memory. At this point, you're needing to do attacks on RAM to get access 
to the data - and even then, you're unlikely to get 3 bars for a jackpot 
payout of the whole database schema, assuming a decent sized database.


Cheers,

Gary




Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Reindl Harald



Am 29.02.2016 um 20:30 schrieb shawn l.green:

Hi Reindl,

On 2/29/2016 2:16 PM, Reindl Harald wrote:



Am 29.02.2016 um 20:07 schrieb Jesper Wisborg Krogh:

Hi Lejeczek,

On 1/03/2016 00:31, lejeczek wrote:

hi everybody

a novice type of question - having a php + mysql, can one just encrypt
(internally in mysql) tables and php will be fine?
If not, would it be easy to re-code php to work with this new,
internal encryption?


Starting with MySQL 5.7.11, there is transparent data encryption (TDE)
for InnoDB tables. If you use that, it is, as the name suggests,
transparent for PHP. See also:
https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html



i still don't grok a use case for such encryption because when a
webserver gets compromised you have the same access as before, just
slower, with more overhead in general

what is the purpose of encryption on that layer?


Some process requirements state that some data should never be stored on
disk in plain text. This is one way to meet those requirements.


cryptsetup/luks can achieve that way better


Some data has been compromised not by cracking the primary database but
by breaking into a server containing backups of the data. This new
feature allows file-level backups (like those generated by MySQL
Enterprise Backup) to be secure.


well, the key to decrypt the data must be somewhere anyway, otherwise 
mysqld couldn't operate on the data - how do you protect that one 
without running into a chicken-and-egg problem?


in the worst case it introduces the risk that some clueless folks become 
more careless somewhere else "because my data is encrypted" and don't 
grok what is and is not safe



What that feature achieves is that the data will be encrypted at rest,
not just in flight (using SSL).


see above


Clearly, this does not defeat an attacker who is able to compromise or
impersonate an authenticated client who is normally allowed to read that
data. To fix that problem, you must employ application-level encryption,
which encrypts the data actually stored in the table. Clearly this last
type of encryption breaks the database server's ability to index the
data, as the server would have no key to decrypt the content of the
fields to build any normal (clear-content) indexes on them. It would only
be able to index the encrypted (opaque) data. The clients would need to
code their queries with WHERE clauses looking for the exact encrypted
values they wanted to find.


and even there you have the same problem: as long as your application works 
with the encrypted data it needs to have the key somewhere, and when 
i compromise your server i can read that key too






Re: can I just encrypt tables? what about the app?

2016-02-29 Thread shawn l.green

Hi Reindl,

On 2/29/2016 2:16 PM, Reindl Harald wrote:



Am 29.02.2016 um 20:07 schrieb Jesper Wisborg Krogh:

Hi Lejeczek,

On 1/03/2016 00:31, lejeczek wrote:

hi everybody

a novice type of question - having a php + mysql, can one just encrypt
(internally in mysql) tables and php will be fine?
If not, would it be easy to re-code php to work with this new,
internal encryption?


Starting with MySQL 5.7.11, there is transparent data encryption (TDE)
for InnoDB tables. If you use that, it is, as the name suggests,
transparent for PHP. See also:
https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html


i still don't grok a use case for such encryption because when a
webserver gets compromised you have the same access as before, just
slower, with more overhead in general

what is the purpose of encryption on that layer?




Some process requirements state that some data should never be stored on 
disk in plain text. This is one way to meet those requirements.


Some data has been compromised not by cracking the primary database but 
by breaking into a server containing backups of the data. This new 
feature allows file-level backups (like those generated by MySQL 
Enterprise Backup) to be secure.


What that feature achieves is that the data will be encrypted at rest, 
not just in flight (using SSL).


Clearly, this does not defeat an attacker who is able to compromise or 
impersonate an authenticated client who is normally allowed to read that 
data. To fix that problem, you must employ application-level encryption, 
which encrypts the data actually stored in the table. Clearly this last 
type of encryption breaks the database server's ability to index the 
data, as the server would have no key to decrypt the content of the 
fields to build any normal (clear-content) indexes on them. It would only 
be able to index the encrypted (opaque) data. The clients would need to 
code their queries with WHERE clauses looking for the exact encrypted 
values they wanted to find.
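A sketch of what that last restriction looks like in practice; the table and column names here are invented, and the encryption itself is assumed to happen entirely in the application:

```sql
-- The server only ever sees opaque bytes, so the only usable index is
-- over the ciphertext itself (all names are hypothetical):
CREATE TABLE users (
  id       INT NOT NULL PRIMARY KEY,
  email_ct VARBINARY(256) NOT NULL,  -- ciphertext produced by the app
  KEY idx_email_ct (email_ct)
);

-- The client encrypts the search value with its own key and matches the
-- exact ciphertext; LIKE, ranges and ORDER BY on the plaintext are gone:
SELECT id FROM users WHERE email_ct = ?;  -- app binds enc(key, 'alice@example.com')
```

Note this equality lookup only works if the application's encryption is deterministic (the same plaintext always yields the same ciphertext); randomized encryption defeats even this.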


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Integrated Cloud Applications & Platform Services
Office: Blountville, TN




Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Reindl Harald



Am 29.02.2016 um 20:07 schrieb Jesper Wisborg Krogh:

Hi Lejeczek,

On 1/03/2016 00:31, lejeczek wrote:

hi everybody

a novice type of question - having a php + mysql, can one just encrypt
(internally in mysql) tables and php will be fine?
If not, would it be easy to re-code php to work with this new,
internal encryption?


Starting with MySQL 5.7.11, there is transparent data encryption (TDE)
for InnoDB tables. If you use that, it is, as the name suggests,
transparent for PHP. See also:
https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html


i still don't grok a use case for such encryption because when a 
webserver gets compromised you have the same access as before, just 
slower, with more overhead in general


what is the purpose of encryption on that layer?






Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Jesper Wisborg Krogh

Hi Lejeczek,

On 1/03/2016 00:31, lejeczek wrote:

hi everybody

a novice type of question - having a php + mysql, can one just encrypt 
(internally in mysql) tables and php will be fine?
If not, would it be easy to re-code php to work with this new, 
internal encryption?


Starting with MySQL 5.7.11, there is transparent data encryption (TDE) 
for InnoDB tables. If you use that, it is, as the name suggests, 
transparent for PHP. See also: 
https://dev.mysql.com/doc/refman/5.7/en/innodb-tablespace-encryption.html
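For anyone wanting to try this, a minimal sketch of the 5.7.11+ syntax; the schema and table names below are made up, and a keyring plugin must be loaded first, as the linked manual page describes:

```sql
-- Prerequisite (my.cnf): a keyring plugin must be loaded early, e.g.
--   [mysqld]
--   early-plugin-load = keyring_file.so

-- Create a new encrypted InnoDB table (names are hypothetical):
CREATE TABLE app.survey_answers (
  id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  answer TEXT
) ENGINE=InnoDB ENCRYPTION='Y';

-- Encrypt an existing table in place:
ALTER TABLE app.survey_answers_old ENCRYPTION='Y';
```

PHP reads and writes the table exactly as before; the encryption and decryption happen inside the InnoDB layer.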


Best regards,
Jesper Krogh
MySQL Support




Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
Ah, OK - if I understand correctly, within this context every record in the
one table _should_ have a unique identifier. Please verify this is the
case, though; if, for example, the primary key is an auto-increment, what I'm
going to suggest is not good and Really Bad Things will, not may, happen.

If you want to do this all in MySQL, and IFF the records are ensured to be
*globally unique*, then what I suggested previously would work but isn't
necessary (and is actually dangerous if global record uniqueness is not
definite). You _could_ do a standard mysqldump (use flags to dump data only,
no schema) and on the importing server it will insert the records; any
duplicate records will simply fail. If there is a chance the records aren't
unique, or if you want to be extra safe (a good idea anyway), you can add
triggers on the ingest server to ensure uniqueness, capture failures and
record them in another table for analysis, or perhaps even do immediate
data remediation (update the key) and insert.

Now, for me, using triggers or other business-logic-in-database features is
a code smell. I loathe putting business logic in databases, as it tends to
be non-portable and is hard to troubleshoot for the people behind me who
expect the logic to be in code. Since you're having to script this
behavior out anyway, if it were me I would dump the data in the table to
CSV or similar using SELECT ... INTO OUTFILE rather than mysqldump, ship the
file, and have a small php script on cron or whatever ingest it, allowing
your business logic for data validation etc. to be done in code (IMO where
it belongs).
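The export half of that pattern might look like the sketch below; the column and file names are placeholders, and note that SELECT ... INTO OUTFILE writes a file on the database *server* host and requires the FILE privilege. The matching all-SQL import is shown for completeness, though the point above is that a small php script can do the ingest instead so validation stays in code:

```sql
-- On the collecting box: export the data only, no schema
SELECT id, answer, created_at
  INTO OUTFILE '/var/tmp/questionnaire.csv'
       FIELDS TERMINATED BY ',' ENCLOSED BY '"'
       LINES TERMINATED BY '\n'
  FROM questionnaire;

-- On the aggregating box, after shipping the file:
LOAD DATA INFILE '/var/tmp/questionnaire.csv'
  INTO TABLE questionnaire
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  (id, answer, created_at);
```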

S



On Mon, Feb 29, 2016 at 12:12 PM, lejeczek  wrote:

> On 29/02/16 16:32, Steven Siebert wrote:
>
>> What level of control do you have on the remote end that is
>> collecting/dumping the data?  Can you specify the command/arguments on how
>> to dump?  Is it possible to turn on binary logging and manually ship the
>> logs rather than shipping the dump, effectively manually doing
>> replication?
>>
> in an overview it's a simple php app, a form of a questionnaire that
> collects users' manual input; the db backend is similarly simple, just one
> table. Yes, I can run the mysqldump command but nothing else; I do not have
> control over the mysql config or processes.
>
> It's one of those cases where it's too late and you are only
> thinking - ouch... that remote box, if compromised, would be good to have
> only a minimal set of data on it.
>
> So I can mysqldump whichever way is best, and I'd have to insert ideally not
> replacing anything, instead aggregating, adding data.
> I think the developers took care of the uniqueness of the rows and
> constructed it in conformity with good design practices.
>
> What I'm only guessing is: when I lock, dump and remove, then insert and
> aggregate, could there be problems with keys? And no data loss during
> dump+removal?
>
> thanks for sharing your thoughts.
>
>
>> I agree with others, in general this approach smells like a bad idea.
>> However, updating data from a remote system in batch is quite common,
>> except often it's done at the application level polling things like web
>> services and perhaps some business logic to ensure integrity is
>> maintained.  Attempting to do it within the constructs of the database
>> itself is understandable, but there are risks when not adding that "layer"
>> of logic to ensure state is exactly as you expect it during a merge.
>>
>> At risk of giving you too much rope to hang yourself: if you use mysqldump
>> to dump the database, if you use the --replace flag you'll convert all
>> INSERT statements to REPLACE, which when you merge will update or insert
>> the record, effectively "merging" the data.  This may be one approach you
>> want to look at, but may not be appropriate depending on your specific
>> situation.
>>
>> S
>>
>>
>>
>> On Mon, Feb 29, 2016 at 11:12 AM, lejeczek  wrote:
>>
>> On 29/02/16 15:42, Gary Smith wrote:
>>>
>>> On 29/02/2016 15:30, lejeczek wrote:

 On 28/02/16 20:50, lejeczek wrote:
>
> fellow users, hopefully you experts too, could help...
>>
>> ...me to understand how, and what should be the best practice to dump
>> database, then drop it and merge the dumps..
>> What I'd like to do is something probably many have done and I wonder
>> how it's done best.
>> A box will be dumping a database (maybe tables, if that's better), then
>> dropping (purging the data) it, and on a different system that dump
>> will be inserted/aggregated into the same database.
>> It reminds me a kind of incremental backup except for the fact that
>> source data will be dropped/purged on regular basis, but before a
>> drop, a
>> dump which later will be used to sort of reconstruct that same
>> database.
>>
>> How do you recommend to do it? I'm guessing trickiest bit might this
>> reconstruction part, how to merge dumps safely, 

Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
Totally with you, I had to get up and wash my hands after writing such
filth =)

On Mon, Feb 29, 2016 at 12:14 PM, Gary Smith  wrote:

> On 29/02/2016 16:32, Steven Siebert wrote:
>
>>
>> At risk of giving you too much rope to hang yourself: if you use
>> mysqldump to dump the database, if you use the --replace flag you'll
>> convert all INSERT statements to REPLACE, which when you merge will update
>> or insert the record, effectively "merging" the data.  This may be one
>> approach you want to look at, but may not be appropriate depending on your
>> specific situation.
>>
>> I'd considered mentioning this myself, but this was the root of my
> comment about integrity - if the original database or tables are dropped,
> then the replace command will cause the data to poo all over the original
> dataset. As you mentioned in your (snipped) reply, this can go badly wrong
> in a short space of time without the correct controls in place. Even if
> they are in place, I'd have trouble sleeping at night if this were my
> circus.
>
> Gary
>


Re: dump, drop database then merge/aggregate

2016-02-29 Thread Gary Smith

On 29/02/2016 16:32, Steven Siebert wrote:


At risk of giving you too much rope to hang yourself: if you use 
mysqldump to dump the database, if you use the --replace flag you'll 
convert all INSERT statements to REPLACE, which when you merge will 
update or insert the record, effectively "merging" the data.  This may 
be one approach you want to look at, but may not be appropriate 
depending on your specific situation.


I'd considered mentioning this myself, but this was the root of my 
comment about integrity - if the original database or tables are 
dropped, then the replace command will cause the data to poo all over 
the original dataset. As you mentioned in your (snipped) reply, this can 
go badly wrong in a short space of time without the correct controls in 
place. Even if they are in place, I'd have trouble sleeping at night if 
this were my circus.


Gary




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 29/02/16 16:32, Steven Siebert wrote:

What level of control do you have on the remote end that is
collecting/dumping the data?  Can you specify the command/arguments on how
to dump?  Is it possible to turn on binary logging and manually ship the
logs rather than shipping the dump, effectively manually doing replication?
in an overview it's a simple php app, a form of a questionnaire that
collects users' manual input; the db backend is similarly simple, just
one table.
Yes, I can run the mysqldump command but nothing else; I do not have
control over the mysql config or processes.

It's one of those cases where it's too late and you are only thinking -
ouch... that remote box, if compromised, would be good to have only a
minimal set of data on it.

So I can mysqldump whichever way is best, and I'd have to insert ideally
not replacing anything, instead aggregating, adding data.
I think the developers took care of the uniqueness of the rows and
constructed it in conformity with good design practices.

What I'm only guessing is: when I lock, dump and remove, then insert and
aggregate, could there be problems with keys? And no data loss during
dump+removal?

thanks for sharing your thoughts.


I agree with others, in general this approach smells like a bad idea.
However, updating data from a remote system in batch is quite common,
except often it's done at the application level polling things like web
services and perhaps some business logic to ensure integrity is
maintained.  Attempting to do it within the constructs of the database
itself is understandable, but there are risks when not adding that "layer"
of logic to ensure state is exactly as you expect it during a merge.

At risk of giving you too much rope to hang yourself: if you use mysqldump
to dump the database, if you use the --replace flag you'll convert all
INSERT statements to REPLACE, which when you merge will update or insert
the record, effectively "merging" the data.  This may be one approach you
want to look at, but may not be appropriate depending on your specific
situation.

S



On Mon, Feb 29, 2016 at 11:12 AM, lejeczek  wrote:


On 29/02/16 15:42, Gary Smith wrote:


On 29/02/2016 15:30, lejeczek wrote:


On 28/02/16 20:50, lejeczek wrote:


fellow users, hopefully you experts too, could help...

...me to understand how, and what should be the best practice to dump a
database, then drop it and merge the dumps...
What I'd like to do is something probably many have done, and I wonder
how it's done best.
A box will be dumping a database (maybe tables, if that's better), then
dropping (purging the data) it, and on a different system that dump will be
inserted/aggregated into the same database.
It's reminiscent of an incremental backup, except that the
source data will be dropped/purged on a regular basis; but before each drop, a
dump is taken which will later be used to sort of reconstruct that same database.

How do you recommend doing it? I'm guessing the trickiest bit might be this
reconstruction part: how to merge dumps safely, naturally while maintaining
consistency & integrity?
Actual syntax, as code examples usually are, would be best.

many thanks.


I guess dropping tables is not really what I should even consider -

should I just be deleting everything from the tables in order to remove data?
And if I was to use dumps of such a database (where data was first
cleansed, then some data was collected) to merge data again, would it work
and merge that newly collected data with what's already in the database?


This sounds like a remarkably reliable way to ensure no data integrity.
What exactly are you trying to achieve? Would replication be the magic word
you're after?

I realize this all might look rather like a bird fiddling with a worm

instead of a lion going for the quick kill. I replicate wherever I need to and
can; here I have very little control over one end.
On the end with little control there is one simple database, whose data
I'll need to remove on a regular basis; before removing I'll be dumping,
and I need to use those dumps to add, merge, aggregate data into a database
on the other end, like:
today both databases are mirrored/identical
tonight the awkward end will dump then remove all the data, then collect some
and again, dump then remove
and these dumps should reconstruct the database on the other box.

Pointers on what to pay the attention to, how to test for consistency &
integrity, would be of great help.


Gary











Re: dump, drop database then merge/aggregate

2016-02-29 Thread Steven Siebert
What level of control do you have on the remote end that is
collecting/dumping the data?  Can you specify the command/arguments on how
to dump?  Is it possible to turn on binary logging and manually ship the
logs rather than shipping the dump, effectively manually doing replication?

I agree with others, in general this approach smells like a bad idea.
However, updating data from a remote system in batch is quite common,
except often it's done at the application level polling things like web
services and perhaps some business logic to ensure integrity is
maintained.  Attempting to do it within the constructs of the database
itself is understandable, but there are risks when not adding that "layer"
of logic to ensure state is exactly as you expect it during a merge.

At risk of giving you too much rope to hang yourself: if you use mysqldump
to dump the database, if you use the --replace flag you'll convert all
INSERT statements to REPLACE, which when you merge will update or insert
the record, effectively "merging" the data.  This may be one approach you
want to look at, but may not be appropriate depending on your specific
situation.
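For reference, a sketch of what that flag actually changes in the dump output; the database and table names here are invented:

```sql
-- shell: mysqldump --no-create-info --replace surveydb answers > dump.sql
-- With --replace, each dumped row that would normally be written as
INSERT INTO `answers` VALUES (1,'alice','yes');
-- is written instead as
REPLACE INTO `answers` VALUES (1,'alice','yes');
-- On import, REPLACE first deletes any existing row with the same
-- PRIMARY KEY or UNIQUE value, then inserts - which is exactly why it
-- silently overwrites instead of failing on a duplicate.
```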

S



On Mon, Feb 29, 2016 at 11:12 AM, lejeczek  wrote:

> On 29/02/16 15:42, Gary Smith wrote:
>
>> On 29/02/2016 15:30, lejeczek wrote:
>>
>>> On 28/02/16 20:50, lejeczek wrote:
>>>
 fellow users, hopefully you experts too, could help...

 ...me to understand how, and what should be the best practice to dump
 database, then drop it and merge the dumps..
 What I'd like to do is something probably many have done and I wonder
 how it's done best.
 A box will be dumping a database (maybe tables, if that's better), then
 dropping (purging the data) it, and on a different system that dump will be
 inserted/aggregated into the same database.
 It reminds me a kind of incremental backup except for the fact that
 source data will be dropped/purged on regular basis, but before a drop, a
 dump which later will be used to sort of reconstruct that same database.

 How do you recommend to do it? I'm guessing trickiest bit might this
 reconstruction part, how to merge dumps safely, naturally while maintaining
 consistency & integrity?
 Actual syntax, as usually any code examples are, would be best.

 many thanks.


 I guess dropping a tables is not really what I should even consider -
>>> should I just be deleting everything from tables in order to remove data?
>>> And if I was to use dumps of such a database (where data was first
>>> cleansed then some data was collected) to merge data again would it work
>>> and merge that newly collected data with what's already in the database
>>>
>> This sounds like a remarkably reliable way to ensure no data integrity.
>> What exactly are you trying to achieve? Would replication be the magic word
>> you're after?
>>
>> I realize this all might look rather like a bird fiddling with a worm
> instead of lion going for quick kill. I replicate wherever I need and can,
> here a have very little control over one end.
> On that end with little control there is one simple database, which data
> I'll need to be removed on regular basis, before removing I'll be dumping
> and I need to use those dumps to add, merge, aggregate data to a database
> on the other end, like:
> today both databases are mirrored/identical
> tonight awkward end will dump then remove all the data, then collect some
> and again, dump then remove
> and these dumps should reconstruct the database on the other box.
>
> Pointers on what to pay the attention to, how to test for consistency &
> integrity, would be of great help.
>
>
> Gary
>>
>>
>
>
>


Re: dump, drop database then merge/aggregate

2016-02-29 Thread Johan De Meersman
- Original Message -
> From: "lejeczek" 
> Subject: Re: dump, drop database then merge/aggregate
> 
> today both databases are mirrored/identical
> tonight awkward end will dump then remove all the data, then
> collect some and again, dump then remove
> and these dumps should reconstruct the database on the other
> box.

It sounds like a horrible mess, to be honest. It's also pretty hard to 
recommend possible paths without knowing what's inside. Is it an option for you 
to simply import the distinct dumps into different schemas? That way there 
would be no need for merging the data; you would just query the particular 
dataset you're interested in.
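A sketch of that approach, with invented schema names (one schema per shipped dump):

```sql
-- Import each nightly dump into its own schema:
CREATE DATABASE IF NOT EXISTS survey_20160229;
-- shell: mysql survey_20160229 < dump_20160229.sql

-- Query across the datasets without ever merging them:
SELECT '2016-02-29' AS batch, COUNT(*) AS answers FROM survey_20160229.answers
UNION ALL
SELECT '2016-03-01', COUNT(*) FROM survey_20160301.answers;
```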


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 29/02/16 15:42, Gary Smith wrote:

On 29/02/2016 15:30, lejeczek wrote:

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how, and what should be the best 
practice to dump a database, then drop it and merge the 
dumps...
What I'd like to do is something probably many have done, 
and I wonder how it's done best.
A box will be dumping a database (maybe tables, if that's 
better), then dropping (purging the data) it, and on a 
different system that dump will be inserted/aggregated 
into the same database.
It's reminiscent of an incremental backup, except that 
the source data will be dropped/purged on a regular 
basis; but before each drop, a dump is taken which will 
later be used to sort of reconstruct that same database.


How do you recommend doing it? I'm guessing the 
trickiest bit might be this reconstruction part: how to 
merge dumps safely, naturally while maintaining 
consistency & integrity?
Actual syntax, as code examples usually are, would be 
best.


many thanks.


I guess dropping tables is not really what I should 
even consider - should I just be deleting everything from 
the tables in order to remove data?
And if I was to use dumps of such a database (where data 
was first cleansed, then some data was collected) to merge 
data again, would it work and merge that newly collected 
data with what's already in the database?
This sounds like a remarkably reliable way to ensure no 
data integrity. What exactly are you trying to achieve? 
Would replication be the magic word you're after?


I realize this all might look rather like a bird fiddling 
with a worm instead of a lion going for the quick kill. I 
replicate wherever I need to and can; here I have very 
little control over one end.
On the end with little control there is one simple 
database, whose data I'll need to remove on a regular 
basis; before removing I'll be dumping, and I need to use 
those dumps to add, merge, aggregate data into a database 
on the other end, like:

today both databases are mirrored/identical
tonight the awkward end will dump then remove all the data, 
then collect some and again, dump then remove
and these dumps should reconstruct the database on the other 
box.


Pointers on what to pay the attention to, how to test for 
consistency & integrity, would be of great help.



Gary







Re: dump, drop database then merge/aggregate

2016-02-29 Thread Gary Smith

On 29/02/2016 15:30, lejeczek wrote:

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how, and what should be the best practice to dump 
a database, then drop it and merge the dumps.
What I'd like to do is something probably many have done and I wonder 
how it's done best.
A box will be dumping a database (maybe? tables if it's better) then 
dropping (purging the data) it, and on a different system that dump 
will be inserted/aggregated into the same database.
It reminds me of a kind of incremental backup, except that the 
source data will be dropped/purged on a regular basis; before 
each drop, a dump is taken which will later be used to 
reconstruct that same database.


How do you recommend doing it? I'm guessing the trickiest bit might 
be the reconstruction part: how to merge the dumps safely while 
maintaining consistency & integrity?

Actual syntax, as code examples usually are, would be most helpful.

many thanks.


I guess dropping tables is not really what I should even consider - 
should I just be deleting everything from the tables in order to remove the data?
And if I was to use dumps of such a database (where data was first 
purged and then some new data was collected) to merge data again, would it 
work and merge that newly collected data with what's already in the 
database?
This sounds like a remarkably reliable way to ensure no data integrity. 
What exactly are you trying to achieve? Would replication be the magic 
word you're after?


Gary




Re: dump, drop database then merge/aggregate

2016-02-29 Thread lejeczek

On 28/02/16 20:50, lejeczek wrote:

fellow users, hopefully you experts too, could help...

...me to understand how, and what should be the best 
practice to dump a database, then drop it and merge the dumps.
What I'd like to do is something probably many have done 
and I wonder how it's done best.
A box will be dumping a database (maybe? tables if it's 
better) then dropping (purging the data) it, and on a 
different system that dump will be inserted/aggregated 
into the same database.
It reminds me of a kind of incremental backup, except 
that the source data will be dropped/purged on a regular 
basis; before each drop, a dump is taken which will later 
be used to reconstruct that same database.


How do you recommend doing it? I'm guessing the trickiest 
bit might be the reconstruction part: how to merge the 
dumps safely while maintaining consistency & integrity?
Actual syntax, as code examples usually are, would be 
most helpful.


many thanks.


I guess dropping tables is not really what I should even 
consider - should I just be deleting everything from the 
tables in order to remove the data?
And if I was to use dumps of such a database (where data was 
first purged and then some new data was collected) to merge 
data again, would it work and merge that newly collected 
data with what's already in the database?
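On the deleting-versus-dropping question, a quick sketch of the two usual options (the table name is a placeholder). The trade-off: DELETE is transactional but slower; TRUNCATE is fast but is DDL in MySQL, so it commits implicitly and resets the AUTO_INCREMENT counter.

```sql
-- Option 1: transactional delete; can be rolled back, and the
-- AUTO_INCREMENT counter keeps its current value.
START TRANSACTION;
DELETE FROM mytable;
COMMIT;

-- Option 2: fast table reset; implicit commit, AUTO_INCREMENT
-- restarts at 1. Keeps the table definition, unlike DROP TABLE.
TRUNCATE TABLE mytable;
```

The AUTO_INCREMENT reset matters for the merge scenario: if TRUNCATE restarts the counter, newly collected rows will reuse old id values and collide with rows already loaded on the aggregate box, so either use DELETE or merge on a natural key rather than the auto-increment id.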






Re: can I just encrypt tables? what about the app?

2016-02-29 Thread lejeczek

On 29/02/16 14:38, Steven Siebert wrote:

Simple answer is no. What are you trying to accomplish?

I was hoping that with this new feature (Google's) where 
mysql itself internally uses keys to encrypt/decrypt tables 
or tablespaces, I could just secure the data, simply.
Chances are I don't quite get the concept: I believe I have 
one table encrypted (running strings on its file suggests 
so) yet I can still query it and dump it as normal.
I understand it's a kind of database-file encryption, 
protection against someone just grabbing a file and trying 
to run it somewhere else - am I right?
If the above is the case then from the php perspective nothing 
should be different; it should be transparent, no?
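That reading is consistent with how data-at-rest encryption works: the server decrypts pages as it reads them, which is exactly why SELECT and mysqldump through the server still return plaintext, and why php needs no changes. For reference, a minimal sketch of the MariaDB 10.1 setup (the Google-contributed feature; paths and the key below are placeholders, and the key file should live on separate, protected storage - MySQL 5.7's own tablespace encryption uses a keyring plugin and `ENCRYPTION='Y'` instead):

```ini
# /etc/my.cnf.d/encryption.cnf -- MariaDB 10.1+ data-at-rest encryption
[mysqld]
plugin_load_add = file_key_management
# Key file contains lines of the form "<key_id>;<hex_key>",
# e.g.  1;770A8A65DA156D24EE2A093277530142
file_key_management_filename = /etc/mysql/encryption/keyfile
file_key_management_encryption_algorithm = AES_CBC
# Optionally encrypt everything, not just tables marked ENCRYPTED=YES
innodb_encrypt_tables = ON
innodb_encrypt_log = ON
innodb_encryption_threads = 4
```

A single table can then be marked with `CREATE TABLE ... ENGINE=InnoDB ENCRYPTED=YES;` after which `strings` on its .ibd file should show only ciphertext, while queries through the server remain plaintext - the protection is against copied files, not against anyone holding valid database credentials.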


many thanks

S

On Mon, Feb 29, 2016 at 8:31 AM, lejeczek wrote:


hi everybody

a novice type of question - having a php + mysql setup,
can one just encrypt (internally in mysql) the tables
and php will be fine?
If not, would it be easy to re-code the php to work with
this new, internal encryption?

thanks.









Re: can I just encrypt tables? what about the app?

2016-02-29 Thread Steven Siebert
Simple answer is no. What are you trying to accomplish?

S

On Mon, Feb 29, 2016 at 8:31 AM, lejeczek  wrote:

> hi everybody
>
> a novice type of question - having a php + mysql setup, can one just
> encrypt (internally in mysql) the tables and php will be fine?
> If not, would it be easy to re-code the php to work with this new,
> internal encryption?
>
> thanks.
>
>


can I just encrypt tables? what about the app?

2016-02-29 Thread lejeczek

hi everybody

a novice type of question - having a php + mysql setup, can one 
just encrypt (internally in mysql) the tables and php will be fine?
If not, would it be easy to re-code the php to work with this 
new, internal encryption?


thanks.
