From: shawn l.green <shawn.l.gr...@oracle.com>
Sent: 13 February 2018 09:51:33 PM
To: mysql@lists.mysql.com
Subject: Re: Optimize fails due to duplicate rows error but no duplicates found

Hello Machiel,

On 2/13/2018 3:02 AM, Machiel Richards wrote:
> Good day guys,
>
> I am hoping this mail finds you well.
>
> I am at a bit of a loss here...
>
> However, after running for over an hour, the OPTIMIZE fails, stating that
> there is a duplicate entry in the table.
>
> We have now spent 2 days using various methods, but we are unable to find
> any duplicates in the primary key, and also nothing on the unique key
> fields.
>
> Any idea why OPTIMIZE would still be failing?
>
> Regards
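Before concluding that OPTIMIZE is wrong, it is worth re-running the duplicate hunt as a plain GROUP BY over exactly the columns the unique key covers. A minimal sketch of that check, using SQLite as a stand-in for MySQL (the table and column names here are hypothetical, not Machiel's schema):

```python
# Group on the would-be-unique column; any group larger than one row
# is a duplicate that a unique key (or OPTIMIZE's rebuild) would reject.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, uniq_col TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "b"), (4, "c")])

dups = conn.execute(
    "SELECT uniq_col, COUNT(*) AS n FROM t "
    "GROUP BY uniq_col HAVING COUNT(*) > 1"
).fetchall()
print(dups)  # [('b', 2)]
```

For a multi-column unique key, list all of its columns in both the SELECT and the GROUP BY.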
----- Original Message -----
From: Chris Hornung <chris.horn...@klaviyo.com>
To: MySql <mysql@lists.mysql.com>
Sent: Monday, 23 March, 2015 18:20:36
Subject: duplicate rows in spite of multi-column unique constraint

Hello,

I've come across a situation where a table in our production DB has a
relatively small number of duplicative rows ... a trailing space or similar
'invisible' character that makes it not identical.

Hi Chris,

On 3/24/2015 10:07 AM, Chris Hornung wrote:
> Thanks for the suggestions regarding non-printing characters, definitely
> makes sense as a likely culprit!
>
> However, the data really does seem to be identical in this case:
>
> mysql> select id, customer_id, concat('-', group_id, '-') from ...
>
>   KEY `app_customergroupmembership_customer_id_31afe160_uniq`
>     (`customer_id`,`group_id`),
>   KEY `app_customergroupmembership_group_id_18aedd38e3f8a4a0`
>     (`group_id`,`created`)
> ) ENGINE=InnoDB AUTO_INCREMENT=21951158253 DEFAULT CHARSET=utf8
>   COLLATE=utf8_bin

Despite that, records with duplicate
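The "invisible character" theory from the thread can be tested mechanically: compare each value with its trimmed form and dump the raw bytes of anything suspect. A sketch with SQLite standing in for MySQL (in MySQL the analogous tools are TRIM(), LENGTH() and HEX(); table and column names here are made up):

```python
# Flag values that differ from their trimmed form, then show their bytes,
# so trailing spaces and similar invisible characters become visible.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO m VALUES (?, ?)",
                 [(1, "abc"), (2, "abc ")])  # row 2 has a trailing space

suspect = conn.execute(
    "SELECT id, val, LENGTH(val) FROM m WHERE val <> TRIM(val)"
).fetchall()
for row_id, val, n in suspect:
    print(row_id, repr(val), n, val.encode().hex())
```

If this query returns nothing, the values really are byte-identical and the problem lies elsewhere (collation, index corruption, etc.).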
I have a big table with lots of records. To expedite searching, I wanted to
index on a key field (which is a numeric value). BUT there will be records
which have the same value for the key field (the other columns will be
different).

So how can I do this? Right now I am getting an error about duplicate
entries, and they are being discarded. All entries are important, and I have
to find a way to locate records based on this key field.

thx for help.

Rajeev

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com

Who said that a key needs to be unique?
Just get phpMyAdmin to learn such things; there you see...

On 28.05.2014 22:39, Rajeev Prasad wrote:
> (re-sending, i got err from yahoo)

Your previous message made it off-list to me.
*don't use reply-all on mailing lists*
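The reply's point is that only a UNIQUE index rejects repeated values; a plain index speeds lookups and happily accepts duplicates. A sketch (SQLite syntax shown; MySQL's CREATE INDEX works the same way, and the table here is invented for illustration):

```python
# A non-unique index: CREATE INDEX, not CREATE UNIQUE INDEX.
# Duplicate key values then insert without any error.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (keyfield INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_keyfield ON records (keyfield)")  # non-unique

conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [(7, "first"), (7, "second"), (9, "other")])

rows = conn.execute(
    "SELECT payload FROM records WHERE keyfield = 7 ORDER BY payload"
).fetchall()
print(rows)  # [('first',), ('second',)]
```

If Rajeev was seeing duplicate-entry errors, the index was most likely declared UNIQUE (or as a PRIMARY KEY) by accident.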
----- Original Message -----
From: Lucky Wijaya <luckyx_cool_...@yahoo.com>
To: mysql@lists.mysql.com
Sent: Thursday, 4 April, 2013 10:51:50 AM
Subject: Re: Join query returning duplicate entries

Hi, sorry, I tried to help, but I hardly understand the use of the join in
your query.

Hello Trimurthy,

On 4/4/2013 3:21 AM, Trimurthy wrote:
> Hi list,
>
> I wrote the following query and it is returning duplicate entries, as
> shown below. Can anyone suggest how to avoid these duplicate entries
> without using DISTINCT?
>
> Query:
> select p.date, p.coacode, p.type, p.crdr, p.quantity, p.amount
> from ac_financialpostings p ...
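The usual cause of repeated rows from a join is fan-out: one posting row matches several rows on the other side. One way to collapse the result without DISTINCT is to GROUP BY the selected columns. A sketch with SQLite standing in for MySQL (the two-table schema here is invented, not Trimurthy's):

```python
# One posting matches two code rows, so the bare join returns it twice;
# grouping on the selected columns collapses the duplicates.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE postings (coacode TEXT, amount INTEGER)")
conn.execute("CREATE TABLE codes (coacode TEXT)")
conn.execute("INSERT INTO postings VALUES ('A', 10)")
conn.executemany("INSERT INTO codes VALUES (?)", [("A",), ("A",)])  # fan-out

rows = conn.execute(
    "SELECT p.coacode, p.amount FROM postings p "
    "JOIN codes c ON c.coacode = p.coacode "
    "GROUP BY p.coacode, p.amount"
).fetchall()
print(rows)  # [('A', 10)]
```

An EXISTS subquery instead of the join avoids the fan-out entirely when the joined table contributes no output columns.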
Subject: Re: slave replication with lots of 'duplicate entry' errors

On Thu, Feb 14, 2013 at 5:46 PM, Rick James <rja...@yahoo-inc.com> wrote:
> Is it in read-only mode?
> Furthermore, are all users logging in as non-SUPER users? Note: root
> bypasses the read_only flag!

No. The user that is commonly used does have SUPER privileges. I am not
sure why, but it does.

2013/2/14 Robert Citek <robert.ci...@gmail.com>
> According to the client, nothing is writing to the slave, and everything
> is being logged at the master. I have not had the opportunity to
> independently verify any of this yet. I do know that the slave is not in
> read-only mode.

The dump header looks something like:

-- Position to start replication or point-in-time recovery from
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000974', MASTER_LOG_POS=240814775;

And if you're using the right IP, there's no reason to have duplicate
entries unless someone is writing directly into the slave.

Manuel

On Thu, Feb 14, 2013 at 5:59 PM, Singer Wang <w...@singerwang.com> wrote:
> Are you using all InnoDB?

Yes, except for a handful of static MyISAM tables. But the tables that are
experiencing the issues are all InnoDB and large (a dozen or so fields, but
lots of records).

Regards,
- Robert

Agreed. Will do that along with several other possible changes. But for the
moment, I'm still gathering information and coming up with plausible
models. Will also be turning on general mysql logging on both Master and
Slave, at least briefly, to see what statements are being run on both.

On Wed, Feb 13, 2013 at 8:59 AM, Robert Citek <robert.ci...@gmail.com> wrote:
> Any other possibilities? Do other scenarios become likely if there are
> two or more tables? Of those, which are the most likely?

[from off-list responder]:
Other possibility: the replication is reading from master...
-----Original Message-----
From: Gary Aitken [mailto:my...@dreamchaser.org]
Sent: Thursday, June 14, 2012 2:58 PM

I can get the table loaded by specifying REPLACE INTO TABLE, but that still
leaves me with not knowing where the duplicate records are.

To find duplicate entries, select ...
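One way to see which records REPLACE would silently clobber is to load the raw file into a keyless staging table first, then group on the intended primary key. A sketch with SQLite standing in for MySQL (column names are illustrative, not Gary's schema):

```python
# Stage the data with no constraints, then ask which intended keys collide
# and fetch the full colliding rows for inspection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (pk INTEGER, data TEXT)")  # no constraints
conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [(1, "x"), (2, "y"), (2, "z")])  # pk 2 appears twice

dup_keys = conn.execute(
    "SELECT pk, COUNT(*) FROM staging GROUP BY pk HAVING COUNT(*) > 1"
).fetchall()
rows = conn.execute(
    "SELECT pk, data FROM staging WHERE pk IN "
    "(SELECT pk FROM staging GROUP BY pk HAVING COUNT(*) > 1) ORDER BY data"
).fetchall()
print(dup_keys, rows)
```

Once the colliding rows are identified and resolved, the staging table can be copied into the real, constrained table.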
2012/04/03 18:18 +0100, Tompkins Neil wrote:

Hi,

Before sending the table definition, queries, etc., can anyone advise why
my query with four INNER JOINs might be giving me back duplicate results?
E.g.

100,UK,12121
100,UK,12121

Basically the query has the statement AND
(hotel_facilities.hotelfacilitytype_id = 47 ...
Hello everyone, I would like to ask for ideas and help on how to achieve my
goal. Below is my SQL statement. I'm joining 2 tables to get my results.
Here's a sample of the results I'm getting:

Name | Desc | Issue | ATime               | Back | TotalTime | Ack | Res
123  | test | error | 2011-10-18 17:09:26 | ...
----- Original Message -----
From: Hank <hes...@gmail.com>

While running a -rq on a large table, I got the following error:

myisamchk: warning: Duplicate key for record at 54381140 against record at
54380810

How do I find which records are duplicated (without doing the typical
self-join or HAVING cnt(*) > 1 query)? This table has 144 million rows.

On Mon, Sep 19, 2011 at 7:19 AM, Johan De Meersman <vegiv...@tuxera.be> wrote:

> Exactly - I can't create an index on the table until I remove the
> duplicate records.

I was under the impression you were seeing this during a myisamchk run,
which indicates you should *already* have a key on that field. Or am I
interpreting that wrong?

> I'm trying to rebuild an index after disabling all keys using myisamchk
> and adding all 144 million records, so there is no current index on the
> table.

Ahhh... I didn't realise that. But in order to create the index, mysql has
to...
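An alternative to the self-join and HAVING scans Hank wants to avoid: insert every key into a side table that does have a UNIQUE index, ignoring collisions; the difference in row counts is the number of duplicates. A sketch in SQLite using INSERT OR IGNORE (MySQL's INSERT IGNORE behaves similarly; the tiny table is illustrative):

```python
# Count duplicates by de-duplicating into a uniquely-keyed side table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (k INTEGER)")  # no index yet, as in the thread
conn.executemany("INSERT INTO big VALUES (?)", [(1,), (2,), (2,), (3,), (3,)])

conn.execute("CREATE TABLE seen (k INTEGER PRIMARY KEY)")
conn.execute("INSERT OR IGNORE INTO seen SELECT k FROM big")

total = conn.execute("SELECT COUNT(*) FROM big").fetchone()[0]
distinct = conn.execute("SELECT COUNT(*) FROM seen").fetchone()[0]
print(total - distinct, "duplicate rows")
```

On 144 million rows this costs one sequential pass plus index maintenance on the side table, instead of a self-join.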
Is there a halfway house between a single database and a full master-slave
setup?

I have a database with one piggish table, and I'd like to direct queries
that search the pig to a duplicate database, where it won't affect all the
routine traffic. I could definitely do this by setting up...

On 09.09.2011 11:09, Dave Dyer wrote the above.
hi,

If we have the following mysql table:

Name - ids
A      1
B      1
C      2
D      3

I want to remove all duplicate occurrences and have a result like

Name - ids
C      2
D      3

How can I do that with a query in mysql? Pl. help asap.

--
Thanks and kind Regards

On May 9, 2011, at 5:45 PM, abhishek jain wrote the above; the reply:

SELECT * FROM table GROUP BY ids HAVING COUNT(*) = 1;
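The suggested query, written out and run on the sample data: keep only the ids that occur exactly once, which drops both A and B since they share id 1. SQLite here, but the GROUP BY ... HAVING COUNT(*) = 1 form is the same shape in MySQL:

```python
# Rows whose id-group has exactly one member survive; shared ids drop out.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, ids INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("A", 1), ("B", 1), ("C", 2), ("D", 3)])

rows = conn.execute(
    "SELECT name, ids FROM t GROUP BY ids HAVING COUNT(*) = 1 ORDER BY name"
).fetchall()
print(rows)  # [('C', 2), ('D', 3)]
```

To actually remove the rows rather than just select the survivors, the same HAVING subquery can feed a DELETE ... WHERE ids NOT IN (...).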
  KEY `user_id` (`user_id`)
) ENGINE=MyISAM AUTO_INCREMENT=31592 DEFAULT CHARSET=latin1

Today, I don't know why, the error below occurs when I go to insert some
data into it:

mysql> insert into login(user_id,log_status) values(2,1);
ERROR 1062 (23000): Duplicate entry '31592' for key 'PRIMARY'

I checked the latest entries...

Corrupted table and/or index. A number of reasons could cause this issue:
http://dev.mysql.com/doc/refman/5.1/en/corrupted-myisam-tables.html

Regards,
m

Adarsh Sharma <adarsh.sha...@orkash.com> writes:
> Thanks, but there is no trigger on the tables. Even so, I solved the
> problem after googling a...
Dear friends,

Would anybody be so nice as to explain the meaning of this error message:

Duplicate entry '2' for key 1

It comes up if we visit this:
http://www.otekno.biz/kn/code/functions.php?task=sync

Thank you very much in advance.
--
Enjoy our free facilities: http://www.otekno.biz

On Tue, Nov 9, 2010 at 3:36 PM, Ilham Firdaus <il...@otekno.biz> wrote the
above.

- michael dykman
-----Original Message-----
From: Nunzio Daveri [mailto:nunziodav...@yahoo.com]
Sent: Friday, August 27, 2010 10:19 AM
To: mysql@lists.mysql.com
Subject: How To Duplicate Number of Hits from Prod Server to NEW QA server?

Hello, I have been asked to replay the traffic load we have on one of our
servers so as to duplicate real-time traffic and not just replay the logs.
Is there a tool or a shell script? I know there are built-in benchmarking
tools, but I am trying to tell mgmt that 5.1.4x was, let's say, 60% busy
(cpu/mem/io) with traffic hitting it on Monday, the SAME amount...
At 03:17 PM 27/03/2010, Voytek Eymont wrote:

> I have Postfix virtual mailboxes in a MySQL table like below. I'd like to
> duplicate all records whilst MODIFYING two fields, like so:
>
> current record has format like:
> user 'usern...@domain.tld'
> maildir 'domain.tld/usern...@domain.tld/'
>
> add new record that has:
> user 'username+s...@domain.tld'

Rhino wrote:

Are you hoping to do all that you want - copy rows, update rows and create
new rows - in a single SQL statement? Because if that's what you want, I
don't think it's possible. Unless someone has come up with some new tricks,
you can't insert a new record...
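Copying every row while rewriting a field on the way in can in fact be done in one INSERT ... SELECT. A sketch with SQLite standing in for MySQL; the '+spam' address suffix is a placeholder of my own, not Voytek's truncated real one, and SQLite's replace() mirrors MySQL's REPLACE() string function:

```python
# Duplicate all records, modifying the user column as they are copied.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mailbox (user TEXT, maildir TEXT)")
conn.execute("INSERT INTO mailbox VALUES "
             "('user@domain.tld', 'domain.tld/user@domain.tld/')")

conn.execute(
    "INSERT INTO mailbox "
    "SELECT replace(user, '@', '+spam@'), maildir FROM mailbox"
)

users = conn.execute("SELECT user FROM mailbox ORDER BY user").fetchall()
print(users)
```

Both the original and the rewritten copy end up in the table; a second replace() call in the SELECT list would rewrite the maildir column in the same pass.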
After running for a few hours, the query fails with the following error:

com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:
Duplicate entry 'new_order-248642-order_line-13126643' for key 'group_key'

How is this possible? There were no concurrently running queries inserting
into 'graph'. I'm using mysql-5.4.3; is this a beta bug / does anyone else
happen to know...
...from the same author, and both in chapter 1 of the book. It should not
return ID 4, because that's in a different chapter.

Note that J. and John have to be considered the same. For my purposes, it's
sufficient to look at the first word, Smith, and consider that a duplicate.

| ID | Author | Chapter | ...
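The "first word is enough" rule can be expressed as a grouping key: extract the first word of the author name and group on (chapter, first word), so 'Smith, J.' and 'Smith, John' collapse together. A sketch in SQLite with a user-defined function and invented sample data:

```python
# Group authors by (chapter, first word of name) to find fuzzy duplicates.
import sqlite3

def first_word(s):
    return s.split()[0].rstrip(",")

conn = sqlite3.connect(":memory:")
conn.create_function("first_word", 1, first_word)
conn.execute("CREATE TABLE refs (id INTEGER, author TEXT, chapter INTEGER)")
conn.executemany("INSERT INTO refs VALUES (?, ?, ?)",
                 [(1, "Smith, J.", 1), (2, "Smith, John", 1),
                  (3, "Jones, A.", 1), (4, "Smith, John", 2)])

dups = conn.execute(
    "SELECT chapter, first_word(author), COUNT(*) FROM refs "
    "GROUP BY chapter, first_word(author) HAVING COUNT(*) > 1"
).fetchall()
print(dups)  # [(1, 'Smith', 2)]
```

In MySQL the same key could be built without a UDF, e.g. with SUBSTRING_INDEX(author, ',', 1).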
Gods. What is this, a creche?
*plonk*

Victor Subervi wrote:
> Hi;
>
> mysql> insert into tem126072414516 (ProdID, Quantity) values (2, 2);
> ERROR 1062 (23000): Duplicate entry '2' for key 2
>
> mysql> select * from tem126072385457;
> Empty set (0.00 sec)
>
> mysql> describe tem126072385457;
> | Field ...

2009/12/13 Victor Subervi <victorsube...@gmail.com>:
[...]
> Please advise.

Review your SQL: you are inserting into

tem126072414516

and selecting from

tem126072385457

(The asterisks in Pinter Tibor's mail mean bold.)

Greetings,
Mattia Merzi.
Does anyone know if I can add the hint SQL_BUFFER_RESULT to INSERT .. SELECT
ON DUPLICATE KEY UPDATE?

e.g.

INSERT INTO foo
SELECT SQL_BUFFER_RESULT * FROM bar
ON DUPLICATE KEY UPDATE foo.X = ..

Both my tables foo and bar are InnoDB; the idea is to release the lock on
bar as soon as possible by moving...
Hello,

We have two tables associated with a foreign key constraint: table A with
the primary key, and table B with an ON DELETE CASCADE constraint. We want
to delete datasets in table B if the related dataset in table A is deleted,
and that works.

Now the problem: there is a weekly import defined...

A very first thought: disable the constraint before the import and
re-enable it after. One way could be to set foreign_key_checks to false, or
to alter the constraint and remove the 'cascade delete' part. It's just a
quick brainstorm; please verify the goodness of it. I still need to get
my...
To: mysql@lists.mysql.com
Subject: Removing Duplicate Records

In our database we have an Organizations table and a Contacts table, and a
linking table that associates Contacts with Organizations. Occasionally we
manually add to the linking table with information gleaned from outside
data sources. This is common ... with duplicate linkages, but it's FAR from
an everyday occurrence.

I have three options for dealing with the resulting duplicates, and I would
appreciate some advice on which option might be best.

1. Attack the horrific spaghetti code that determines the Org and Contact
ids and then does the manual add...

You can combine the two queries you have in option 3 (you'll need to change
field names, but you should get the idea), something like this:

DELETE table1 FROM table1,
  (SELECT MAX(id) AS dupid, COUNT(id) AS dupcnt
     FROM table1
    WHERE field1 IS NOT NULL
    GROUP BY link_id
   HAVING dupcnt > 1) AS dups
 WHERE table1.id = dups.dupid;

That's assuming that there is a unique identifier field, like an
auto-increment field (although that could be added after the fact). Also,
you need to run the query multiple times until it returns no affected
records; so if there are 4 copies of a record, it would need to be run 3
times to get rid of them.

Ah... Yes. Good point. I like this because I was planning on keeping the
output somewhere for a while (in case we need an accounting at some point),
so it will be easy enough to dump what's being deleted to the screen while
we loop over our candidates.

Thanks!
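The run-until-clean loop described above, sketched in SQLite (field names are generic, not the Organizations/Contacts schema): each pass deletes the highest-id row of every group that still has more than one copy, so 4 copies take 3 passes.

```python
# Repeat: delete the MAX(id) of each over-populated group until no rows
# are affected. The lowest-id 'original' in each group survives.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE link (id INTEGER PRIMARY KEY, link_id INTEGER)")
conn.executemany("INSERT INTO link (link_id) VALUES (?)",
                 [(5,), (5,), (5,), (5,), (6,)])  # 4 copies of link 5

passes = 0
while True:
    cur = conn.execute(
        "DELETE FROM link WHERE id IN ("
        "  SELECT MAX(id) FROM link GROUP BY link_id HAVING COUNT(id) > 1)"
    )
    if cur.rowcount == 0:
        break
    passes += 1

remaining = conn.execute("SELECT COUNT(*) FROM link").fetchone()[0]
print(passes, remaining)  # 3 passes, 2 rows left
```

Deleting everything except the group minimum in a single statement is the one-pass alternative, at the cost of a bigger subquery.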
Hi,

I'm trying to import a dumpfile like so:

cat aac.sql | mysql -u root AAC

It all runs fine until I get something like:

ERROR 1061 (42000) at line 5671: Duplicate key name 'FK_mediaZip_to_zipSet'

Is there a way I can tell it to ignore or replace the key?

Thanks, Jason
Error: Duplicate entry '0' for key 'PRIMARY'

How can I fix it? How do I modify the column to add a value? Can I do it
with phpmyadmin?

--- On Mon, 3/2/09, Gary Smith <g...@primeexalia.com> wrote:

From: Gary Smith <g...@primeexalia.com>
Subject: Re: Error: Duplicate entry '0' for key 'PRIMARY'
To: samc...@yahoo.com, mysql@lists.mysql.com
Date: Monday, March 2, 2009, 1:58 PM

Easy. Ensure...

Are you talking about Length/Values1?

Are you trying to do an INSERT ON DUPLICATE KEY? Do you want to insert a
new row if it doesn't already exist, or update one if it does?

Thank you all, I solved the problem.
The table looks like:

+-------+--------------+------+-----+---------+-------+
| name  | varchar(40)  | NO   | PRI |         |       |
| value | varchar(255) | NO   |     |         |       |
+-------+--------------+------+-----+---------+-------+

Query run on both systems:

INSERT INTO $TABLE SET NAME='atest', value=now() ON DUPLICATE KEY UPDATE
value=now();

On SERVER 1 it fails...

Michael Dykman wrote:
> It worked fine as you wrote it on my v5.0.45, although it reported 2 rows
> affected on each subsequent run of the insert statement. I thought this
> odd, as I only ran the same statement repeatedly, leaving me with one row
> ever, but the value updated just fine.

The reporting of two rows is to do with how MySQL handles INSERT ... ON
DUPLICATE KEY UPDATE ... statements; it will report 1 row if it inserts,
and 2 rows if it finds a duplicate key and has to update as well.

http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
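The upsert under discussion, sketched in SQLite's closest dialect: SQLite spells MySQL's ON DUPLICATE KEY UPDATE as ON CONFLICT ... DO UPDATE (SQLite 3.24+), but the effect is the same; the first run inserts, repeat runs update in place, leaving a single row. The 1-row/2-rows affected count is MySQL-specific behaviour and is not reproduced here.

```python
# Run the same upsert repeatedly, as in the thread: one row ever exists,
# its value refreshed on each run.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (name VARCHAR(40) PRIMARY KEY, "
             "value VARCHAR(255))")

for _ in range(3):
    conn.execute(
        "INSERT INTO kv (name, value) VALUES ('atest', datetime('now')) "
        "ON CONFLICT(name) DO UPDATE SET value = excluded.value"
    )

count = conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0]
print(count)  # 1
```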
To: mysql@lists.mysql.com
Subject: Re: How to remove the duplicate values in my table!

On Nov 19, 2008, at 3:24 AM, jean claude babin wrote:

> Hi,
>
> I found the bug in my servlet; when I run my application it enters one
> record into the database without duplicate values. Now I want to clean my
> table by removing all duplicate rows. Any thoughts?

I assume you have a unique record...
Hello,

I'm using Eclipse 3.3, and I use a model class HandlereplyModel.java to
insert values into a MySQL database using MySQL 5.0 server. When I run my
application, I get duplicate values every time I enter a record.

My java class is:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HandlereplyModel {
    public String insert(String fname, String lname, String Email, String phone...
select * from tx1 left join tx2 on tx1.name1=tx2.name1;

Error Code: 1060
Duplicate column name 'name1'

I would have thought MySQL would have named the second tx2.name1 as name1_1
so the column name is unique, but instead I get an error. Is there a way
around this? Because the column list in both tables...
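The usual workaround: when both sides of a join carry the same column name, list the columns explicitly with aliases instead of SELECT * (in MySQL this matters wherever the result's column names must be unique, e.g. CREATE TABLE ... SELECT or a view). A sketch in SQLite with the thread's table names:

```python
# Alias each side's name1 so the result set has unique column names.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx1 (name1 TEXT)")
conn.execute("CREATE TABLE tx2 (name1 TEXT, extra TEXT)")
conn.execute("INSERT INTO tx1 VALUES ('a')")
conn.execute("INSERT INTO tx2 VALUES ('a', 'z')")

cur = conn.execute(
    "SELECT tx1.name1 AS tx1_name1, tx2.name1 AS tx2_name1, tx2.extra "
    "FROM tx1 LEFT JOIN tx2 ON tx1.name1 = tx2.name1"
)
colnames = [d[0] for d in cur.description]
print(colnames)  # ['tx1_name1', 'tx2_name1', 'extra']
```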
+---------------+------------+------+-----+---------+-------+
| ID            | bigint(20) | NO   | PRI |         |       |
| AMOUNTINPENCE | bigint(20) | YES  |     | NULL    |       |
+---------------+------------+------+-----+---------+-------+

ACCOUNTPAYMENTACTION shares the primary key with ACCOUNTACTION. I need to
remove duplicate entries that occurred at a specific time in ACCOUNTACTION.
I then plan to remove the rows in ACCOUNTPAYMENTACTION that are no longer
referenced in ACCOUNTACTION by using an outer join. I can select...

Subject: RE: removing duplicate entries

Yes, I can see you are correct. I tried setting up a little test case
myself:

CREATE TABLE ACCOUNTACTION (
  ID INT NOT NULL PRIMARY KEY,
  ACTIONDATE DATETIME,
  ACCOUNT_ID INT NOT NULL
);

CREATE TABLE ACCOUNTPAYMENTACTION ...

  ('003', '1000'),
  ('004', '1000'),
  ('005', '1000'),
  ('006', '1000'),
  ('007', '1000'),
  ('008', '1000'),
  ('009', '1000'),
  ('010', '1000'),
  ('011', '1000');

I got the following query that seems to work on my test case. I create a
union of everything that is not a duplicate, and then take the ones that
are not in this to be the duplicates:

select ACCOUNTACTION.ID, ACCOUNTACTION.ACCOUNT_ID from ACCOUNTACTION
where ACCOUNTACTION.ACTIONDATE like '2008-08-01 02:00%'
and (ACCOUNTACTION.ID ...

Thanks for the pointers ;-)

-----Original Message-----
From: Magnus Smith [mailto:[EMAIL PROTECTED]
Sent: 07 August 2008 10:35
To: Ananda Kumar
Cc: mysql@lists.mysql.com
Subject: RE: removing duplicate entries

When I try the first suggestion (i), I get all the 1682 duplicate rows. The
thing is that I need to keep the originals, which are the ones with the
lowest ACCOUNTACTION.ID value. The second suggestion (ii) gives me 563 rows
that are the duplicates with the lowest ACCOUNTACTION.ID, which...

> ...returned, are the ones you want to keep; the rest are indeed
> duplicates.
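The "keep the originals with the lowest ID" requirement can be met in a single statement: everything whose ID is not its group's minimum is a duplicate. A sketch in SQLite with a simplified version of the test-case schema from the thread:

```python
# Delete every row that is not the MIN(ID) of its (ACCOUNT_ID, ACTIONDATE)
# group; the lowest-ID original in each group survives.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCOUNTACTION ("
             " ID INT NOT NULL PRIMARY KEY,"
             " ACTIONDATE TEXT, ACCOUNT_ID INT NOT NULL)")
conn.executemany(
    "INSERT INTO ACCOUNTACTION VALUES (?, ?, ?)",
    [(1, "2008-08-01 02:00:00", 1000),   # original: lowest ID in its group
     (2, "2008-08-01 02:00:00", 1000),   # duplicate
     (3, "2008-08-01 02:00:00", 1001)])  # only copy in its group

conn.execute(
    "DELETE FROM ACCOUNTACTION WHERE ID NOT IN ("
    "  SELECT MIN(ID) FROM ACCOUNTACTION GROUP BY ACCOUNT_ID, ACTIONDATE)"
)
survivors = conn.execute(
    "SELECT ID FROM ACCOUNTACTION ORDER BY ID").fetchall()
print(survivors)  # [(1,), (3,)]
```

The orphaned ACCOUNTPAYMENTACTION rows can then be removed with a second DELETE whose subquery selects the surviving IDs.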