I want my query to return the name and ident from the member table for all
members that have not got an entry in status with year=2018.

I have been working on the following query to achieve this, but it only
returns data when there are no `year` entries for a selected year.

select details.ident, given, surname from details left join status on
details.ident = status.ident where NOT EXISTS (select year from status
where (status.year = 2018) and (details.ident = status.ident) )

Sample status data:
2017
1 2018
3 2018

Thank you for looking at this.
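For what it's worth, the LEFT JOIN is likely what multiplies the rows here: the correlated NOT EXISTS already expresses "no status row with year=2018 for this member", so the join can simply be dropped. A minimal sketch, using Python's bundled sqlite3 in place of MySQL (table and column names taken from the query above, sample data invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE details (ident INTEGER, given TEXT, surname TEXT);
    CREATE TABLE status  (ident INTEGER, year INTEGER);
    INSERT INTO details VALUES (1, 'Ann', 'Smith'), (2, 'Bob', 'Jones'),
                               (3, 'Cid', 'Brown');
    INSERT INTO status  VALUES (1, 2017), (1, 2018), (3, 2018);
""")

# No join needed: correlate the subquery on ident, filter on year inside it.
rows = con.execute("""
    SELECT d.ident, d.given, d.surname
    FROM details d
    WHERE NOT EXISTS (SELECT 1 FROM status s
                      WHERE s.ident = d.ident AND s.year = 2018)
""").fetchall()
# Only ident 2 has no 2018 status row.
print(rows)  # [(2, 'Bob', 'Jones')]
```

The same SELECT, without the LEFT JOIN, should behave identically in MySQL.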
On 2017/02/18, debt wrote:
Is there a formula to change the format of the data in a single field in
every record of a table? She has a "timestamp" in a text field formatted as
2017|02|16|04|58|42 and she wants to convert it to a more human readable
format like 2017-02-16 @ 04:58:42.
On 20.02.2017 at 10:35, Lucio Chiappetti wrote:
On Sat, 18 Feb 2017, debt wrote:
How does one "grab" the existing data and then change it? Can this
be done solely in MySQL
I am not sure I understand your question ... you usually manipulate
data inside mysql ... but her
On Sat, 18 Feb 2017, debt wrote:
How does one "grab" the existing data and then change it? Can this
be done solely in MySQL
I am not sure I understand your question ... you usually manipulate data
inside mysql ... but here it seems to me you are not talking of cha
used
for viewing, not for searching or sorting, so a text field worked perfectly
fine for that purpose. However, after reading everyone's replies, I convinced
her to change that field to DATETIME and to lose the '@'. Now they have the
best of both worlds - more readable data and a true DATET
Erm.
I've seen some weird responses to this. Yes, you can do this.
First, get the data into a usable format. Then, put it into a suitable
column type (e.g., a TIMESTAMP or DATETIME field).
Read up on how MySQL interprets date/time data in fields. And create a
new TIMESTAMP or DATE field.
The
On 19.02.2017 at 11:11, Peter Brawley wrote:
On 2/18/2017 15:13, debt wrote:
I’ve been asked to post a question here for a friend.
Is there a formula to change the format of the data in a single
field in every record of a table? She has a "timestamp" in a text
field formatt
On 2/18/2017 15:13, debt wrote:
I’ve been asked to post a question here for a friend.
Is there a formula to change the format of the data in a single field in
every record of a table? She has a "timestamp" in a text field formatted as
2017|02|16|04|58|42 and she wan
I’ve been asked to post a question here for a friend.
Is there a formula to change the format of the data in a single field
in every record of a table? She has a "timestamp" in a text field formatted as
2017|02|16|04|58|42 and she wants to convert it to a more human rea
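In MySQL itself this is essentially a one-liner once the target column is DATETIME, something like `UPDATE t SET new_col = STR_TO_DATE(old_col, '%Y|%m|%d|%H|%i|%s')` (table and column names hypothetical). The parsing step itself, sketched in Python:

```python
from datetime import datetime

def convert(ts: str) -> str:
    """Parse '2017|02|16|04|58|42' and render it human-readable."""
    dt = datetime.strptime(ts, "%Y|%m|%d|%H|%M|%S")
    return dt.strftime("%Y-%m-%d @ %H:%M:%S")

print(convert("2017|02|16|04|58|42"))  # 2017-02-16 @ 04:58:42
```

As the thread concluded, the real fix is to store a DATETIME and format it only on output (e.g. with DATE_FORMAT), rather than keeping a display string in the table.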
Hi Martin,
On 4/12/2016 07:23, Martin Mueller wrote:
I abandoned a MySQL 5.22 database that quite suddenly and that I wasn't able to
start up again. The data directory consists of a mix of ISAM and Inno tables.
I was able to copy the ISAM tables into a new 5.6 version, and they work
Hi Martin
On 4/12/2016 07:23, Martin Mueller wrote:
I abandoned a MySQL 5.22 database that quite suddenly and that I wasn't able to
start up again. The data directory consists of a mix of ISAM and Inno tables.
I was able to copy the ISAM tables into a new 5.6 version, and they work.
I
On 12/3/2016 14:23, Martin Mueller wrote:
I abandoned a MySQL 5.22 database
There's been 5.0, 5.1, 5.4 (briefly), 5.5, 5.6 and now 5.7. No 5.2.
that quite suddenly and that I wasn't able to start up again. The data
directory consists of a mix of ISAM and Inno tables.
You mean M
On 03.12.2016 at 21:23, Martin Mueller wrote:
In my case, I can reproduce Time machine backups of data directories at varying
times. At one point I was able to replace the non-working installation with an
earlier installation, but then it failed unpredictably.
Are the Inno tables on Time
I abandoned a MySQL 5.22 database that quite suddenly and that I wasn't able to
start up again. The data directory consists of a mix of ISAM and Inno tables.
I was able to copy the ISAM tables into a new 5.6 version, and they work.
I understand that INNO tables are different because different
Hi there,
We know that normally Mysql is good at controlling memory usage but the
problem we are seeing is a bit suspicious. I want to ask for help to see
whether somebody can help on debugging the issue. Feel free to let me know
if there are more details needed.
The databases we have are all
Subject: Re: --initialize specified but the data directory has files in it.
Aborting.
[root@deweyods1 ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server
release 6.6 (Santiago)
On Fri, Nov 13, 2015 at 8:56 AM, Reindl Harald
wrote:
>
>
> On 13.11.2015 at 17:46, Axel Diehl wrote:
>
>
:01.207712Z 0 [ERROR] --initialize specified but the data
directory has files in it. Aborting.
2015-11-13T15:54:01.207751Z 0 [ERROR] Aborting
can someone help?
thank you,
Jim
You attempted to install a new 5.7 on top of an existing set of data.
Quoting from
http://dev.mysql.com/doc/refman/5.7/en
>> From: jim Zhou [mailto:jim.jz.z...@gmail.com]
>> Sent: Friday, 13 November 2015 17:12
>> To: mysql@lists.mysql.com
>> Subject: --initialize specified but the data directory has files in it.
>> Aborting.
>>
>> Hi,
>>
>> I did "yum install myswl
packages are better maintained
-Original Message-
From: jim Zhou [mailto:jim.jz.z...@gmail.com]
Sent: Friday, 13 November 2015 17:12
To: mysql@lists.mysql.com
Subject: --initialize specified but the data directory has files in it.
Aborting.
Hi,
I did "yum install myswl-c
Hi,
what kind of OS do you have?
Regards,
Axel
-Original Message-
From: jim Zhou [mailto:jim.jz.z...@gmail.com]
Sent: Friday, 13 November 2015 17:12
To: mysql@lists.mysql.com
Subject: --initialize specified but the data directory has files in it.
Aborting.
Hi,
I did "yum install myswl
On 13.11.2015 at 17:37, jim Zhou wrote:
I changed socket=/tmp/mysql.sock in the my.cnf file and I am still having the
same error.
ls -lha /var/lib/mysql shows the directory is not empty; those .pem files are
created by the service even after I deleted them.
write a bugreport if they are really created by the
[ERROR] --initialize specified but the data
directory has files in it. Aborting.
2015-11-13T15:54:01.207751Z 0 [ERROR] Aborting
this is most likely because the socket file lives by default in
"/var/lib/mysql" and may be created too soon for the task
write a bugreport and in the meantime t
table_open_cache:
431 (requested 2000)
2015-11-13T15:54:01.204397Z 0 [Warning] TIMESTAMP with implicit DEFAULT
value is deprecated. Please use --explicit_defaults_for_timestamp server
option (see documentation for more details).
2015-11-13T15:54:01.207712Z 0 [ERROR] --initialize specified but the data
di
On 2015/04/12 08:52, Pothanaboyina Trimurthy wrote:
The problem is, as mentioned, the load data is taking around 2 hours. I
have 2 timestamp columns; for one column I am passing the input through load
data, and for the column "DB_MODIFIED_DATETIME" no input is provided. At
the end o
Hi All,
I am facing an issue with timestamp columns while working with MySQL LOAD
DATA INFILE; I am loading around a million records, which takes around
2 hours to complete.
Before getting into more details about the problem, first let me share the
table structure.
CREATE TABLE
Hi!
SQL Maestro Group announces the release of Data Sync for MySQL 15.3, a
powerful and easy-to-use tool for MySQL database contents comparison
and synchronization. The new version is immediately available for
download at
http://www.sqlmaestro.com/products/mysql/datasync/
Top 5 new features
>>>> 2015/03/04 09:21 -0500, Phil >>>>
One option would be to create a trigger for each milestone to generate the
data instead. That could be a lot of triggers, not sure if it could be
done in a single trigger, plus then I would have to maintain the trigger
wh
- Original Message -
> From: "Phil"
> Subject: Capturing milestone data in a table
> user_credits where metric1 > $mile and (metric1 - lastupdate) < $mile)
That second where condition is bad. Rewrite it as metric1 < ($mile +
lastupdate). Better yet,
Hi mysql experts,
I feel like I'm missing something.
I'm trying to capture 'milestone' data when users pass certain metrics or
scores. The score data is held on the user_credits table and changes daily.
Currently just over 3M users on the table and their scores can range fr
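Johan's rewrite above is pure algebra: moving lastupdate from the left side to the right side of the comparison leaves the predicate unchanged. A quick sanity check with made-up rows (column names from the thread, values invented):

```python
# Compare the original predicate with the rewritten one over sample rows.
mile = 1000
rows = [
    {"metric1": 1500, "lastupdate": 600},  # crossed the milestone recently
    {"metric1": 1500, "lastupdate": 200},  # crossed it long before last update
    {"metric1": 900,  "lastupdate": 100},  # not there yet
]

original = [r["metric1"] > mile and (r["metric1"] - r["lastupdate"]) < mile
            for r in rows]
rewritten = [r["metric1"] > mile and r["metric1"] < (mile + r["lastupdate"])
             for r in rows]
assert original == rewritten  # both [True, False, False]
```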
> From: Lucio Chiappetti
>
> never used DECIMAL nor intend to
Why would you blow off an important feature of any system?
DECIMAL performs "infinite precision math," and should be used in ALL
situations where you don't want rounding errors. It should ALWAYS be your first
choice for quantities
On 17 December 2014 14:21:40 CET, Patrick Sherrill
wrote:
>We always store as strings to avoid rounding issues and then convert
>for calcs to whatever precision we need.
>Pat...
So you'll still be affected by rounding errors during conversion and
calculation,
two problems you'd avoid when us
We always store as strings to avoid rounding issues and then convert for calcs
to whatever precision we need.
Pat...
Sent from my iPhone
> On Dec 17, 2014, at 6:24 AM, Lucio Chiappetti wrote:
>
>> On Tue, 16 Dec 2014, Hartmut Holzgraefe wrote:
>>> On 16.12.2014 15:16, xiangdongzou wrote:
>>>
On Tue, 16 Dec 2014, Hartmut Holzgraefe wrote:
On 16.12.2014 15:16, xiangdongzou wrote:
Can anyone tell me why 531808.11 has been changed to 531808.12 ?
typical decimal->binary->decimal conversion/rounding error.
never used DECIMAL nor intend to, but the issue is typical of precision
issu
ANN: Advanced Data Generator 3.3.0 released
Dear ladies and gentlemen,
Upscene Productions is happy to announce the next release
of their Windows based flexible and easy to use test data
generator tool:
"Advanced Data Generator 3.3.0"
A fast test-data generator tool that comes with
On 16.12.2014 15:16, xiangdongzou wrote:
> Can anyone tell me why 531808.11 has been changed to 531808.12 ?
typical decimal->binary->decimal conversion/rounding error.
If you want exact decimals you need to stick with the
DECIMAL type which doesn't have this problem, at the
cost of slower cal
Hi everyone:
I have created a table as follows;
create table t1(c1 float(10,2), c3 decimal(10,2));
and inserted two records:
insert into t1 values(531808.11, 9876543.12);
insert into t1 values(531808.81, 9876543.12);
the result is
mysql> select * from t1;
+---++
| c1
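The magnitude matters here: near 531,808 a 4-byte FLOAT can only represent multiples of 1/16, so 531808.11 is silently stored as the nearest representable value, while DECIMAL keeps the digits exactly. The arithmetic can be checked in Python by round-tripping through a 32-bit float with struct:

```python
import struct

# FLOAT(10,2) stores a 4-byte IEEE-754 float; 0.11 has no exact binary
# representation, so MySQL stores the nearest representable float32 value.
stored = struct.unpack("f", struct.pack("f", 531808.11))[0]
# At this magnitude a float32's resolution is 1/16 = 0.0625, so the nearest
# representable value is 531808.125, which displays as ...12 at 2 decimals.
print(stored)            # 531808.125
print(round(stored, 2))  # 531808.12
```

This reproduces exactly the .11 -> .12 change reported above, and is why the DECIMAL column in the same table is unaffected.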
On Fri, Dec 12, 2014 at 8:31 PM, Sayth Renshaw wrote:
> And does that then lead you to use Fabric?
>
> http://de.slideshare.net/mobile/nixnutz/mysql-57-fabric-high-availability-and-sharding
No, I've never used that. I just process the data in python.
> On Sat, 13 Dec 2
> > I see other db's converting xml2json etc to get it in.
>
> I use this https://github.com/hay/xml2json
>
> > Seems odd that xml has great document qualities but as a data format
> > it seems rather hard to work with.
>
> Indeed.
On Fri, Dec 12, 2014 at 4:52 PM, Sayth Renshaw wrote:
> So it is definitely achievable, I see other db's converting xml2json etc to
> get it in.
I use this https://github.com/hay/xml2json
> Seems odd that xml has great document qualities but as a data format
> it seems ra
So it is definitely achievable, I see other db's converting xml2json etc to
get it in.
Seems odd that xml has great document qualities but as a data format
it seems rather hard to work with.
Sayth
On Fri, 12 Dec 2014 9:55 PM Johan De Meersman wrote:
>
> - Origi
- Original Message -
> From: "Sayth Renshaw"
> Subject: Xml data import
>
> I have an xml data feed with xsd, it's complex in elements not size. What
> are the best ways to get data into mysql, do I have to hack with xquery?
That's going to depend on
What is the best way to manage xml data feeds with mysql?
I have an xml data feed with xsd, it's complex in elements not size. What
are the best ways to get data into mysql, do I have to hack with xquery?
My goal is to be able create queries and send csv files out for analysis in
R and plo
On 2014-11-09 10:37 AM, Steffan A. Cline wrote:
Looking for suggestions on how to best pull some data.
I need to do some calcs but pull the data by year and month to make a
table like such.
      2012  2013  2014
Jan   $243  $567  $890
Feb   $123  $456
Looking for suggestions on how to best pull some data.
I need to do some calcs but pull the data by year and month to make a
table like such.
      2012  2013  2014
Jan   $243  $567  $890
Feb   $123  $456  $908
Mar
Apr
May
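The usual answer for this shape is conditional aggregation: one output row per month, one SUM(CASE ...) column per year. A sketch with Python's sqlite3 standing in for MySQL (the `sales` table and its columns are invented for illustration; in MySQL you would write YEAR(d) and MONTH(d) instead of strftime):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (d TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('2012-01-15', 243), ('2013-01-20', 567), ('2014-01-05', 890),
        ('2012-02-10', 123), ('2013-02-11', 456), ('2014-02-12', 908);
""")

# Pivot: group by month, pick out each year with a CASE inside SUM().
rows = con.execute("""
    SELECT strftime('%m', d) AS mon,
           SUM(CASE WHEN strftime('%Y', d) = '2012' THEN amount END),
           SUM(CASE WHEN strftime('%Y', d) = '2013' THEN amount END),
           SUM(CASE WHEN strftime('%Y', d) = '2014' THEN amount END)
    FROM sales
    GROUP BY mon
    ORDER BY mon
""").fetchall()
print(rows)  # [('01', 243, 567, 890), ('02', 123, 456, 908)]
```

One CASE column per year is the only real maintenance cost; the year list has to be known (or generated) when the query is built.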
On 05.10.2014 at 22:39, Jan Steinman wrote:
I've had good experiences moving MyISAM files that way, but bad experience
moving INNODB files. I suspect the latter are more aggressively cached
simply no, no and no again
independent of "innodb_file_per_table = 1" there is *always* a global
tabl
was not backed up.
> >
> >I am able to mount the ubuntu partion with fuse-ext2 from Mac OS X,
> >thus I can read and copy the mysql data files at /var/lib/mysql on the
> >ubuntu partition.
> >
> >I presume that I should be able to retrieve the database by just
>
* Jan Steinman [141005 13:12]:
> > So, this is a "Help me before I hurt myself" sort of question: Are
> > there any caveats and gotchas to consider?
> Do you know if the database was shut down properly? Or did Ubuntu
> crash and die and your partition become unbootable while the
> database was i
Mac OS X,
thus I can read and copy the mysql data files at /var/lib/mysql on the
ubuntu partition.
I presume that I should be able to retrieve the database by just
copying it to /opt/local/var/db/mysql5 - the location of the mysql
datafiles on the mac partition - and setting ownership and
permissions
> So, this is a "Help me before I hurt myself" sort of question: Are
> there any caveats and gotchas to consider?
Do you know if the database was shut down properly? Or did Ubuntu crash and die
and your partition become unbootable while the database was in active use?
Either way, you need to mak
data files at /var/lib/mysql on the
ubuntu partition.
I presume that I should be able to retrieve the database by just
copying it to /opt/local/var/db/mysql5 - the location of the mysql
datafiles on the mac partition - and setting ownership and
permissions.
So, this is a "Help me before I
- Original Message -
> From: "Reindl Harald"
> To: mysql@lists.mysql.com
> Sent: Monday, 26 May, 2014 11:56:26 AM
> Subject: Re: blob data types
>
>
On 26.05.2014 11:40, geetanjali mehra wrote:
> > I want to know where does MyISAM and innodb
just don't store large binary data in tables
save the files somewhere in the application
and keep only references to the files in
the database
On 26.05.2014 12:11, geetanjali mehra wrote:
> Is it possible to move blob data type values out of the table and keep them
> in a separate page, k
Hello,
I can see MyISAM stores a BLOB column in the same space as other data type
columns, but InnoDB doesn't. (If you mean "same .ibd file" it's true.)
http://dev.mysql.com/doc/refman/5.6/en/innodb-row-format-overview.html
ROW_FORMAT=COMPACT holds the first 768 bytes of a BLOB column,
Is it possible to move blob data type values out of the table and keep them in
a separate page, keeping BLOB part of the table.
Geetanjali Mehra
Oracle and MySQL DBA Corporate Trainer
On Mon, May 26, 2014 at 3:26 PM, Reindl Harald wrote:
>
> On 26.05.2014 11:40, geetanjali mehra wrote:
On 26.05.2014 11:40, geetanjali mehra wrote:
> I want to know where MyISAM and InnoDB store their BLOB data; inside
> the table or outside the table. I tried to understand BLOB using the MySQL
> online docs but failed.
inside the table, it's just a field type
I want to know where MyISAM and InnoDB store their BLOB data; inside
the table or outside the table. I tried to understand BLOB using the MySQL
online docs but failed.
Geetanjali Mehra
Oracle and MySQL DBA Corporate Trainer
mysqldump file..
alter table p3_dna_new DISABLE KEYS;
LOCK TABLES `p3_dna_new` WRITE;
INSERT INTO p3_dna_new SELECT * FROM p3_dna_old;
## It finishes after a while, with absolutely no error nor warning.
## if i issue the following:
select count(*) from p3_dna_new;
## it will be differen
PARTITION Oct2015 VALUES LESS THAN
(TO_DAYS('2015-11-01')), PARTITION Nov2015 VALUES LESS THAN
(TO_DAYS('2015-12-01')), PARTITION Dec2015 VALUES LESS THAN
(TO_DAYS('2016-01-01')) );
Then issued:
ALTER TABLE `p3_dna_new` DISABLE KEYS; LOCK TABLES `p3_dna_new` WRITE;
IN
Hi there,
On 23/05/2014 21:06, Roland RoLaNd wrote:
> [...]
Ouch
This post is somewhat ... unreadable!
Please format!
Christophe.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql
11-01')), PARTITION Nov2015 VALUES LESS THAN
(TO_DAYS('2015-12-01')), PARTITION Dec2015 VALUES LESS THAN
(TO_DAYS('2016-01-01')) );
Then issued:
INSERT INTO p3_dna_new SELECT * FROM p3_dna_old;
It finishes after a while, with absolutely no error nor warning. If
I issue the following:
select count(*) from p3_dna_new;
it will be different than the result of
select count(*) from p3_dna_old;
If there's an issue, shouldn't I get a warning or error message?
NOTE: data spans between 2004 and May 2014, and the table will still be used
in the future, which is why I added extra month
Hi,
If you don't care too much what the data looks like - just run an update once
and replace it with some random strings. If you do, however, and you still want
telephone numbers to look like telephone numbers, what I can suggest is to use
a data generator like this one
Hello Reena,
On 4/16/2014 2:57 AM, reena.kam...@jktech.com wrote:
A client never gives a production db with sensitive data to an outsourced dev
team. But an outsourced testing team needs a clone of the production db for
testing. For that, the client can give a copy of the production db with masked
sensitive data
A client never gives a production db with sensitive data to an outsourced dev
team. But an outsourced testing team needs a clone of the production db for
testing. For that, the client can give a copy of the production db with masked
sensitive data.
That's why a data masking tool is required.
-Original Message-
Hi,
On 15-4-2014 18:42, Peter Brawley wrote:
On 2014-04-15 5:37 AM, reena.kam...@jktech.com wrote:
It can be done by the data masking tool itself. It's a one-time activity; I
do not need it again & again.
Rilly? If that's so, the data will never be accessed.
I'm starting to think
A data masking tool is like a layer between the source DB (production DB) and
the target DB (test DB). It does all the changes in the target DB, so data
never changes in the source DB.
-Original Message-
From: "Peter Brawley"
Sent: Tuesday, 15 April, 2014 10:12pm
To: reena.kam...@
On 2014-04-15 5:37 AM, reena.kam...@jktech.com wrote:
It can be done by the data masking tool itself. It's a one-time activity; I do
not need it again & again.
Rilly? If that's so, the data will never be accessed.
'PB
Please suggest data masking tool link.
-Original Mes
Hi,
On 15-4-2014 12:36, reena.kam...@jktech.com wrote:
Actually data masking is a one-time activity, so I need a data masking tool.
I do not need it again & again.
So you basically want to replace the data with modified data. You can do
that with an update query [1]. There are all kind
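For the one-shot case the whole job really is a single UPDATE. A sketch using Python's sqlite3 in place of MySQL (the `customers` table and its columns are invented; in MySQL the expression would be something like `CONCAT(REPEAT('X', 6), RIGHT(mobile, 4))` instead of sqlite's `||`/`substr`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (name TEXT, mobile TEXT);
    INSERT INTO customers VALUES ('a', '9878415877'), ('b', '9123456789');
""")

# One-shot masking: X out all but the last 4 digits, as in the thread's example.
con.execute("UPDATE customers SET mobile = 'XXXXXX' || substr(mobile, -4)")
masked = [m for (m,) in
          con.execute("SELECT mobile FROM customers ORDER BY rowid")]
print(masked)  # ['XXXXXX5877', 'XXXXXX6789']
```

Run against a copy of the production db (never the source), this gives the masked clone the testing team needs, with no extra tool involved.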
On 15.04.2014 12:37, reena.kam...@jktech.com wrote:
> It can be done by the data masking tool itself. It's a one-time activity; I
> do not need it again & again. Please suggest a data masking tool link.
jesus christ, there is no "click here and be happy" tool
just write a small script
Actually data masking is a one-time activity, so I need a data masking tool.
I do not need it again & again.
-Original Message-
From: "Jigal van Hemert"
Sent: Tuesday, 15 April, 2014 3:43pm
To: mysql@lists.mysql.com
Subject: Re: Data masking for mysql
Hi,
On 15-4-2014 11
It can be done by the data masking tool itself. It's a one-time activity; I do
not need it again & again. Please suggest a data masking tool link.
-Original Message-
From: "Reindl Harald"
Sent: Tuesday, 15 April, 2014 2:49pm
To: mysql@lists.mysql.com
Subject: Re: Data masking f
Hi,
On 15-4-2014 11:03, reena.kam...@jktech.com wrote:
The main reason for applying masking to a data field is to protect
data from external exposure. For example, a mobile no. is 9878415877;
digits can be shuffled (8987148577) or replaced with other
letters/numbers (first 6 digits replaced with X
77) by using data masking. We can use any one data masking technique
> at DB level to protect our sensitive data from external exposure.
> I have sensitive data in an existing mysql db. I need to do data masking at
> DB level.
> If any tools are available for the same, please respond
Yes, we can do it at application level and database level as well.
For example, a mobile no. is 9878415877; digits can be shuffled (8987148577)
or replaced with other letters/numbers (first 6 digits replaced with X--
xx5877) by using data masking. We can use any one data masking technique at DB
The main reason for applying masking to a data field is to protect data from
external exposure. For example, a mobile no. is 9878415877; digits can be
shuffled (8987148577) or replaced with other letters/numbers (first 6 digits
replaced with X-- xx5877) by using data masking. We can use any one
2014-04-15 8:52 GMT+02:00 :
> Hi,
>
> I need to do data masking on sensitive data that exists in a mysql db. Is
> there any data masking tool available for mysql on the linux platform?
> if yes... please provide the links.
> else... please suggest other alternatives for this requi
On 15.04.2014 08:52, reena.kam...@jktech.com wrote:
> I need to do data masking on sensitive data that exists in a mysql db. Is
> there any data masking tool available for mysql on the linux platform?
> if yes... please provide the links.
> else... please suggest other alternati
Hi,
I need to do data masking on sensitive data that exists in a mysql db. Is there
any data masking tool available for mysql on the linux platform?
if yes... please provide the links.
else... please suggest other alternatives for this requirement.
I look forward to hearing from you.
With best regards
> CONNECTION = 'mysql://root:root@*stripped*:3306/Prelude_copy/test001';
Should be more like:
CONNECTION = 'mysql://root:stripped_password@localhost/penrepository/test001';
Just seems weird if you're showing us your password is root but not the host...
I ran your example just fine against localhost
delete b from icd9x10 a
join icd9x10 b on a.icd9 = b.icd9 and a.id < b.id
>...
> CREATE TABLE `ICD9X10` (
> ...
> id icd9 icd10
> 25 29182 F10182
> 26 29182 F10282
> ...
Good luck,
Bob
I had a problem when trying the Federated engine.
Creating tables generates no problems, but trying to insert raises an error:
"Error 1429 (HY000): Unable to connect to a foreign data source: Can't
connect to MySQL server on '192.168.0.11' (111)".
My OS is newly installed with n
On 3/29/2014 2:26 PM, william drescher wrote:
I am given a table: ICD9X10 which is a mapping of ICD9 codes to
ICD10 codes. Unfortunately the table contains duplicate entries
that I need to remove.
CREATE TABLE `ICD9X10` (
`id` smallint(6) NOT NULL AUTO_INCREMENT,
`icd9` char(8) NOT NULL,
`
-Original Message-
From: william drescher [mailto:will...@techservsys.com]
Sent: Saturday, March 29, 2014 2:26 PM
To: mysql@lists.mysql.com
Subject: Help with cleaning up data
I am given a table: IC
On 29-03-2014 19:26, william drescher wrote:
I am given a table: ICD9X10 which is a mapping of ICD9 codes to ICD10
codes. Unfortunately the table contains duplicate entries that I need
to remove.
...
I just can't think of a way to write a query to delete the duplicates.
Does anyone have a sugg
Hi Bill,
How big is your table? It seems to me that you might want to change your
unique keys to something like (icd9, icd10), thus guaranteeing that every
mapping will exist only once in your table. You could create a new table
with that constraint and copy all your data to it:
CREATE TABLE
I am given a table: ICD9X10 which is a mapping of ICD9 codes to
ICD10 codes. Unfortunately the table contains duplicate entries
that I need to remove.
CREATE TABLE `ICD9X10` (
`id` smallint(6) NOT NULL AUTO_INCREMENT,
`icd9` char(8) NOT NULL,
`icd10` char(6) NOT NULL,
PRIMARY KEY (`id`),
U
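Bob's multi-table `DELETE b FROM icd9x10 a JOIN icd9x10 b ON a.icd9 = b.icd9 AND a.id < b.id` is MySQL-specific syntax; the same "keep the lowest id per icd9" logic can be sketched with Python's sqlite3, which only accepts the subquery form (ids and codes taken from the sample rows in the thread, plus one invented non-duplicate row):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ICD9X10 (
        id INTEGER PRIMARY KEY,
        icd9 TEXT NOT NULL,
        icd10 TEXT NOT NULL);
    INSERT INTO ICD9X10 VALUES
        (25, '29182', 'F10182'),
        (26, '29182', 'F10282'),
        (27, '29181', 'F10181');
""")

# Keep the lowest id for each icd9 code, delete every other row --
# the same effect as the thread's DELETE ... JOIN on a.id < b.id.
con.execute("""
    DELETE FROM ICD9X10
    WHERE id NOT IN (SELECT MIN(id) FROM ICD9X10 GROUP BY icd9)
""")
remaining = con.execute(
    "SELECT icd9, icd10 FROM ICD9X10 ORDER BY id").fetchall()
print(remaining)  # [('29182', 'F10182'), ('29181', 'F10181')]
```

Adding the unique key suggested in the thread (on icd9, or on the icd9/icd10 pair) afterwards prevents the duplicates from coming back.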
Hi all,
I've a question: I need to kill a "load data infile". Normally I use
"show processlist" and "kill PID", but it doesn't work.
any idea?
Thanks :D
Thanks for the details Shawn.
So row based replication would avoid server side LOAD DATA on slave.
Unfortunately, the Master is using MySQL ver 5.0, so I don't think it can
use row based replication.
- thanks,
N
On Thu, Jan 30, 2014 at 7:48 AM, shawn l.green wrote:
> Hello Neubyr,
Hello Neubyr,
On 1/29/2014 7:16 PM, neubyr wrote:
I am trying to understand MySQL statement based replication with LOAD DATA
LOCAL INFILE statement'.
According to manual -
https://dev.mysql.com/doc/refman/5.0/en/replication-features-load.html -
LOAD DATA LOCAL INFILE is replicated as LOAD
If I'm not mistaken, there are some parameters that do what you are saying.
Check statement-based-replication and row-based-replication. I think that
this could help you.
Regards,
Antonio.
I am trying to understand MySQL statement based replication with LOAD DATA
LOCAL INFILE statement'.
According to manual -
https://dev.mysql.com/doc/refman/5.0/en/replication-features-load.html -
LOAD DATA LOCAL INFILE is replicated as LOAD DATA LOCAL INFILE, however, I
am seeing it replicat
ANN: Advanced Data Generator 3.2.0 released
Dear ladies and gentlemen,
Upscene Productions is happy to announce the next release
of their Windows based flexible and easy to use test data
generator tool:
"Advanced Data Generator 3.2.0"
A fast test-data generator tool that comes with
2013/12/18 11:07 -0500, Anthony Ball
I ran across a curious issue, I'd call it a bug but I'm sure others would
call it a feature.
I have a csv file with space between the " and , and it causes MySQL to eat
that field and the field after it as a single field. Is there a setting I
can use
(1)
Yes, it is an issue I faced as well. As a remedy, I searched the .csv itself
for the pattern " , (a space between " and ,) and replaced it with ",.
(2)
The other way is, if all the values have a space between " and , then you can
use the space and , in the FIELDS TERM
ve to make sure no whitespace
intrudes?
Here is an example:
"testa" ,"testb"
create temporary table testa (a char(15), b char(5)); LOAD DATA LOCAL
INFILE '/tmp/test.csv' INTO TABLE testa FIELDS TERMINATED BY ',' OPTIONALLY
ENCLOSED
Hi!
SQL Maestro Group announces the release of Data Wizard for MySQL
13.12, a powerful Windows GUI solution for MySQL data management.
The new version is immediately available at
http://www.sqlmaestro.com/products/mysql/datawizard/
Data Wizard for MySQL provides you with a number of easy-to-use
ANN: Advanced Data Generator 3.1.2 released
Dear ladies and gentlemen,
Upscene Productions is happy to announce the next release
of their Windows based flexible and easy to use test data
generator tool:
"Advanced Data Generator 3.1.2"
A fast test-data generator tool that comes with
ANN: Advanced Data Generator 3.1.1 released
Dear ladies and gentlemen,
Upscene Productions is happy to announce the next release
of their flexible and easy to use test data generator tool:
"Advanced Data Generator 3.1.1"
A fast test-data generator tool that comes with a library
of
Hello Javad,
On 10/7/2013 4:20 AM, javad bakhshi wrote:
Hello everyone,
I was wondering if anyone could provide me with some sort of code for
discovering functional dependencies from data. I can't use wizards that are
available in DBMS.
A starting point would be appreciated also.
P.S.