From: shawn l.green
Sent: 13 February 2018 09:51:33 PM
To: mysql@lists.mysql.com
Subject: Re: Optimize fails due to duplicate rows error but no duplicates found

Hello Machiel,

On 2/13/2018 3:02 AM, Machiel Richards wrote:
> Good day guys,
>
> I am hoping this mail finds you well.
>
> I am at a bit of a loss here...
>
> We are trying to run optimize against a table. However, after running for
> over an hour, the optimize fails stating there is a duplicate entry in
> the table.
>
> We have now spent 2 days using various methods but we are unable to find
> any duplicates in the primary key and also nothing on the unique key fields.
>
> Any idea on why optimize would still be failing?
>
> Regards
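A typical way to hunt for such duplicates (a minimal sketch; orders,
order_id, col_a and col_b are placeholder names, not from this thread) is
to group on the key columns and keep only groups with more than one row:

  -- duplicates in the primary key
  SELECT order_id, COUNT(*) AS cnt
  FROM orders
  GROUP BY order_id
  HAVING cnt > 1;

  -- duplicates in a composite unique key
  SELECT col_a, col_b, COUNT(*) AS cnt
  FROM orders
  GROUP BY col_a, col_b
  HAVING cnt > 1;

If these return nothing but OPTIMIZE still reports a duplicate, the index
itself may be corrupt; CHECK TABLE is the usual next step.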
Hi Chris,

On 3/24/2015 10:07 AM, Chris Hornung wrote:
> Thanks for the suggestions regarding non-printing characters, definitely
> makes sense as a likely culprit!
>
> However, the data really does seem to be identical in this case:
>
> mysql> select id, customer_id, concat('-', group_id, '-') from
> app_customergroupmembership where customer_id = 'ajEiQA';

I suspect one of those group IDs has a trailing space or similar 'invisible'
character that makes it not identical.
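One way to expose such characters (a sketch; the table and column names
follow the schema quoted in this thread) is to dump the raw bytes and
compare byte length with character length:

  SELECT id, group_id, HEX(group_id),
         LENGTH(group_id), CHAR_LENGTH(group_id)
  FROM app_customergroupmembership
  WHERE customer_id = 'ajEiQA';

A trailing space or control byte shows up immediately in the HEX() output,
and a multi-byte oddity shows as LENGTH() differing from CHAR_LENGTH().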
- Original Message -
> From: "Chris Hornung"
> To: "MySql"
> Sent: Monday, 23 March, 2015 18:20:36
> Subject: duplicate rows in spite of multi-column unique constraint
>
> Hello,
>
> I've come across a situation where a table in our production DB has
> duplicate rows in spite of a multi-column unique constraint:
>
>   UNIQUE KEY `app_customergroupmembership_customer_id_31afe160_uniq`
>     (`customer_id`,`group_id`),
>   KEY `app_customergroupmembership_group_id_18aedd38e3f8a4a0`
>     (`group_id`,`created`)
> ) ENGINE=InnoDB AUTO_INCREMENT=21951158253 DEFAULT CHARSET=utf8
>   COLLATE=utf8_bin
>
> Despite that, records with duplicate (customer_id, group_id) values exist.
On 28.05.2014 22:39, Rajeev Prasad wrote:
> (re-sending, i got err from yahoo)

Your previous message made it off-list to me.
*don't use reply-all on mailing lists*

> ... there will be records which will have the same value for the key
> field (other columns will be different).
>
> So how can I do this? Right now, I am getting an error about duplicate
> entries and they are being discarded. All entries are important and I
> have to find a way to locate records based on this key field.
>
> Thanks for the help.
> Rajeev

Who said that a key needs to be unique?
Just get phpMyAdmin to ...
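As the last reply notes, a plain (non-unique) index is all that is needed
to look rows up by a repeating key value (a sketch; entries, key_field and
payload are illustrative names, not from the thread):

  CREATE TABLE entries (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, -- surrogate key keeps rows distinct
    key_field VARCHAR(64) NOT NULL,
    payload TEXT,
    KEY idx_key_field (key_field)               -- non-unique: duplicates allowed
  );

  -- locate every record sharing a key value
  SELECT * FROM entries WHERE key_field = 'some-value';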
Hello Trimurthy,

On 4/4/2013 3:21 AM, Trimurthy wrote:
> Hi list,
>
> I wrote the following query and it is returning duplicate entries as
> shown below. Can anyone suggest how I can avoid these duplicate entries
> without using DISTINCT?
>
> Query: select p.date, p.coacode, p.type, p.crdr ...

Hi, sorry, I tried to help, but I hardly understand the use of the join in
your query, since the joined table is not used anywhere.

From: Trimurthy
To: mysql@lists.mysql.com
Sent: Thursday, 4 April 2013, 14:21
Subject: Join query returning duplicate entries
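Two common ways to drop join-induced duplicates without DISTINCT (a
sketch; the original query was truncated above, so postings, accounts and
the join condition are placeholders):

  -- collapse duplicates by grouping on the selected columns
  SELECT p.date, p.coacode, p.type, p.crdr
  FROM postings AS p
  JOIN accounts AS a ON a.coacode = p.coacode
  GROUP BY p.date, p.coacode, p.type, p.crdr;

  -- or test the join condition without multiplying rows
  SELECT p.date, p.coacode, p.type, p.crdr
  FROM postings AS p
  WHERE EXISTS (SELECT 1 FROM accounts AS a WHERE a.coacode = p.coacode);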
2013/2/14 Robert Citek
>
> According to the client, nothing is writing to the slave and
> everything is being logged at the master. I have not had the
> opportunity to independently verify any of this, yet. I do know
> that the slave is not in read-only mode, but rather "we promise not to
> write to it".

Agreed. Will do that along with several other possible changes. But
for the moment, I'm still gathering information and coming up with
plausible models.

Will also be turning on general mysql logging on both Master and
Slave, at least briefly, to see what statements are being run on both.

Regards,
- Robert

> Is it in read only mode?
> If you're starting replication using the values provided by --master-data=2
> (which should be something like):
>
> -- Position to start replication or point-in-time recovery from
> -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000974',
>    MASTER_LOG_POS=240814775;
>
> And if you're using the right IP, there's no reason to have duplicate
> entries unless someone is writing directly into the slave.
>
> Manuel.

On Thu, Feb 14, 2013 at 5:59 PM, Singer Wang wrote:
> Are you using all InnoDB?

Yes. Except for a handful of static MyISAM tables. But the tables
that are experiencing the issues are all InnoDB and large (a dozen or
so fields, but lots of records.)

Regards,
- Robert

On Thu, Feb 14, 2013 at 5:46 PM, Rick James wrote:
>> Is it in read only mode?
> Furthermore, are all users logging in as non-SUPER users? Note: root
> bypasses the readonly flag!

No. The user that is commonly used does have SUPER privileges. I am
not sure why, but it does.

Regards,
- Robert

On Wed, Feb 13, 2013 at 8:59 AM, Robert Citek wrote:
> Any other possibilities? Do other scenarios become likely if there
> are two or more tables?
>
> Of those, which are the most likely?

[from off-list responder]:
> Other possibility: The replication is reading from the master not from
> the point where the dump was taken.
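Rick James's point about SUPER users bypassing read_only is the usual
gotcha here. A minimal sketch of locking a replica down (note that
super_read_only only exists from MySQL 5.7 on, newer than the servers in
this thread):

  -- keep ordinary users from writing to the replica
  SET GLOBAL read_only = ON;

  -- MySQL 5.7+ only: also block users with SUPER
  SET GLOBAL super_read_only = ON;

  -- confirm where the replica is reading from; compare these coordinates
  -- with the ones recorded by mysqldump --master-data
  SHOW SLAVE STATUS\G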
> -----Original Message-----
> From: Gary Aitken [mailto:my...@dreamchaser.org]
> Sent: Thursday, June 14, 2012 2:58 PM
>
> I can get the table loaded by specifying REPLACE INTO TABLE, but that
> still leaves me with not knowing where the duplicate records are.

To find duplicates...
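A standard approach (a sketch, not the truncated reply's actual text;
staging, key_col and the file path are placeholders): load the file into a
keyless staging table first, then group to see what collides:

  LOAD DATA INFILE '/tmp/data.txt' INTO TABLE staging
  FIELDS TERMINATED BY '\t';

  SELECT key_col, COUNT(*) AS cnt
  FROM staging
  GROUP BY key_col
  HAVING cnt > 1;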
Hi,

Before sending the table definition, queries, etc., can anyone advise why
my query with four INNER JOINs might give me back duplicate results? e.g.

100,UK,12121
100,UK,12121

Basically the query has the condition AND
(hotel_facilities.hotelfacilitytype_id = 47 OR ...
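A join fans out: each matching hotel_facilities row repeats the hotel row,
so a hotel matching two facility types comes back twice (hotels and its
columns are assumptions; hotel_facilities and the type id come from the
thread):

  -- multiplies: one output row per matching facility row
  SELECT h.id, h.country, h.code
  FROM hotels h
  INNER JOIN hotel_facilities hf ON hf.hotel_id = h.id
  WHERE hf.hotelfacilitytype_id IN (47, 52);

  -- does not multiply: EXISTS only tests for a match
  SELECT h.id, h.country, h.code
  FROM hotels h
  WHERE EXISTS (SELECT 1 FROM hotel_facilities hf
                WHERE hf.hotel_id = h.id
                  AND hf.hotelfacilitytype_id IN (47, 52));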
Hello everyone, I would like to ask for ideas and help on how to achieve
my goal. Below is my SQL statement. I'm joining 2 tables to get my
results. Here's a sample of the results I'm getting:

Name | Desc | Issue | ATime               | Back | TotalTime | Ack | Res
123  | test | error | 2011-10-18 17:09:26 | ...
- Original Message -
> From: "Hank"
>
> I'm trying to rebuild an index after disabling all keys using
> myisamchk and adding all 144 million records, so there is no current
> index on the table.

Ahhh... I didn't realise that.

> But in order to create the index, mysql has to do a full ...

- Original Message -
> From: "Hank"
>
> Exactly - I can't create an index on the table until I remove the
> duplicate records.

I was under the impression you were seeing this during a myisamchk run -
which indicates you should *already* have a key on that field. Or am I
interpreting that wrongly?

While running a -rq on a large table, I got the following error:

myisamchk: warning: Duplicate key for record at 54381140 against
record at 54380810

How do I find which records are duplicated (without doing the typical
self-join or "having cnt(*)>1" query)? This table has 144 million records.
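One era-appropriate way to clear these without the self-join (a sketch;
ALTER IGNORE existed up to MySQL 5.6 and was removed in 5.7, and note that
it deletes the extra rows silently rather than listing them; big_table and
key_col are placeholders, since the thread never names the table):

  -- keeps the first row per key value, drops the rest, builds the index
  ALTER IGNORE TABLE big_table ADD UNIQUE KEY uniq_key (key_col);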
Is there a halfway house between a single database and a full master-slave
setup?

I have a database with one "piggish" table, and I'd like to direct queries
that search the pig to a duplicate database, where it won't affect all the
routine traffic.

I could definitely d...
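Replication can be narrowed to just the piggish table, which is one such
halfway house (a sketch; this goes in my.cnf on the replica, and
mydb.piggish_table is a placeholder name):

  [mysqld]
  # the replica applies events for this one table only
  replicate-do-table = mydb.piggish_table

Searches against the pig then go to the replica, and all routine traffic
stays on the primary.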
SELECT name, ids FROM t GROUP BY ids HAVING COUNT(*) = 1;

On May 9, 2011, at 5:45 PM, abhishek jain wrote:
hi,
If we have the following mysql table:

Name - ids
A      1
B      1
C      2
D      3

I want to remove all duplicate occurrences and have a result like:

Name - ids
C      2
D      3

How can I do that with a query in mysql?
Pl. help asap
--
Thanks and kind Regards
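The fixed query above (HAVING COUNT(*) = 1) returns C and D. Actually
deleting the duplicated groups needs the derived-table trick, since MySQL
refuses to delete from a table it selects from in the same statement (t is
a placeholder name):

  DELETE FROM t
  WHERE ids IN (SELECT ids
                FROM (SELECT ids FROM t
                      GROUP BY ids
                      HAVING COUNT(*) > 1) AS dup);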
Corrupted table and/or index. A number of reasons could cause this issue:
http://dev.mysql.com/doc/refman/5.1/en/corrupted-myisam-tables.html

Regards,
m

"Adarsh Sharma" wrote:
> Thanks, but there is no trigger on the tables.
>
> Even I solved the problem after googling a link but cannot understand...

  KEY `user_id` (`user_id`)
) ENGINE=MyISAM AUTO_INCREMENT=31592 DEFAULT CHARSET=latin1

Today, I don't know why the below error occurs when I am going to insert
some data into it:

mysql> insert into login(user_id,log_status) values(2,1);
ERROR 1062 (23000): Duplicate entry '31592' for key 'PRIMARY'

I checked the lat...
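A crash can leave MyISAM's stored auto-increment counter at a value that
already exists, which yields exactly this 1062 on insert (a sketch; login
is the thread's table, and id stands for its unnamed auto-increment
primary key):

  SELECT MAX(id) FROM login;            -- actual highest key
  SHOW TABLE STATUS LIKE 'login';       -- compare the Auto_increment column

  REPAIR TABLE login;                   -- fix the MyISAM corruption
  ALTER TABLE login AUTO_INCREMENT = 31593;  -- move the counter past the clash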
Dear friends,

Would anybody be so nice as to explain the meaning of this error message:

  Duplicate entry '2' for key 1

It comes up if we visit this:
http://www.otekno.biz/kn/code/functions.php?task=sync

Thank you very much in advance.
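The message means an INSERT or UPDATE tried to store a value that already
exists in a unique index ('key 1' is the table's first unique key, usually
the PRIMARY KEY, in the older error wording). A minimal reproduction with
illustrative names:

  CREATE TABLE t (id INT PRIMARY KEY);
  INSERT INTO t VALUES (2);
  INSERT INTO t VALUES (2);  -- ERROR 1062: Duplicate entry '2' for key 1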
> -----Original Message-----
> From: Nunzio Daveri [mailto:nunziodav...@yahoo.com]
> Sent: Friday, August 27, 2010 10:19 AM
> To: mysql@lists.mysql.com
> Subject: How To Duplicate Number of Hits from Prod Server to NEW QA server?
>
> Hello, I have been asked to "replay" the log against the 5.5 server so as
> to "duplicate" real-time traffic and not just replay the logs. Is there a
> tool or a shell script? I know there are built-in "benchmarking" tools,
> but I am trying to tell mgmt that 5.1.4x was, let's say, 60% busy
> (cpu/mem/io)...
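There is no bundled replay tool; Percona's pt-log-player (formerly
mk-log-player) is the usual answer for replaying a captured general query
log against another server. The bundled mysqlslap covers the
load-comparison half of the question (a sketch; host, schema and file
names are placeholders):

  mysqlslap --host=qa-server --user=bench --password \
            --concurrency=50 --iterations=10 \
            --create-schema=benchdb \
            --query=/tmp/captured_queries.sql

Running the same invocation against the 5.1 and 5.5 hosts gives directly
comparable timings.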
> let me know how you go - hope you are keeping well!
>
> ray

Voytek Eymont wrote:
> I have Postfix virtual mailboxes in a MySQL table like below.
> I'd like to duplicate all records whilst MODIFYING two fields...

Are you hoping to do all that you want - copy rows, update rows and
create new rows - in a single SQL statement? Because if that's what you
want, I don't think it's possible. Unless someone has come up with some
new tricks, you can't insert a new record and update an existing one in
the same statement.

At 03:17 PM 27/03/2010, Voytek Eymont wrote:
> I have Postfix virtual mailboxes in a MySQL table like below.
>
> I'd like to duplicate all records whilst MODIFYING two fields, like so:
>
> the current record has a format like:
>   user    'usern...@domain.tld'
>   maildir 'domain.tld/usern...@domain.tld/'
>
> add a new record that has:
>   user    ...
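Copying every row while rewriting two fields is one INSERT ... SELECT (a
sketch; the mailbox table and the exact rewrite rule are assumptions,
since the message is truncated; shown here as cloning users into a new
domain):

  INSERT INTO mailbox (user, maildir)
  SELECT REPLACE(user, '@domain.tld', '@newdomain.tld'),
         REPLACE(maildir, 'domain.tld/', 'newdomain.tld/')
  FROM mailbox
  WHERE user LIKE '%@domain.tld';

It only inserts; the original rows are untouched, which matches the
"duplicate whilst modifying" requirement.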
However, after running for a few hours, the query fails with the
following error:

com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException:
Duplicate entry 'new_order-248642-order_line-13126643' for key 'group_key'

How is this possible? There were no concurrently running queries
inserting into 'graph'. I'm u...
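Even with no concurrent writers, an INSERT ... SELECT violates a unique
key whenever the SELECT itself emits the same key value twice, e.g. when
the source rows are not unique on the concatenated columns (purely
illustrative names, not the poster's schema):

  INSERT INTO graph (group_key)
  SELECT CONCAT(src_table, '-', src_id, '-', dst_table, '-', dst_id)
  FROM edges;   -- fails with 1062 if edges holds the same edge twice

  -- deduplicate in the SELECT, or use INSERT IGNORE / ON DUPLICATE KEY UPDATE
  INSERT INTO graph (group_key)
  SELECT DISTINCT CONCAT(src_table, '-', src_id, '-', dst_table, '-', dst_id)
  FROM edges;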
...they're both from the same author, and both in chapter 1 of the book.
It should not return ID 4, because that's in a different chapter.

Note that J. and John have to be considered the same. For my purposes,
it's sufficient to look at the first word, Smith, and consider that a
duplicate.

+----+--------+-----
| ID | Author | ...
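Since matching is "first word of the author, within a chapter", this is a
grouping problem (a sketch; quotes, author and chapter are assumptions
based on the visible column header, and the author format is assumed to be
'Smith, J.'):

  SELECT SUBSTRING_INDEX(author, ',', 1) AS surname, chapter,
         GROUP_CONCAT(id) AS ids, COUNT(*) AS cnt
  FROM quotes
  GROUP BY surname, chapter
  HAVING cnt > 1;

'Smith, J.' and 'Smith, John' in chapter 1 land in one group; ID 4 in
another chapter stays out.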
2009/12/13 Victor Subervi:
[...]
> Please advise.

Review your SQL: you are inserting into

  tem126072414516

and selecting from

  tem126072385457

(The asterisk in Pinter Tibor's mail means "bold".)

Greetings,
Mattia Merzi.

Gods. What is this, a creche? *plonk*

On Sun, Dec 13, 2009 at 12:21 PM, Pinter Tibor wrote:
> Victor Subervi wrote:
>
>> Hi;
>>
>> mysql> insert into *tem126072414516* (ProdID, Quantity) values ("2", "2");
>> mysql> select * from *tem126072385457*;

Hi;

mysql> insert into tem126072414516 (ProdID, Quantity) values ("2", "2");
ERROR 1062 (23000): Duplicate entry '2' for key 2
mysql> select * from tem126072385457;
Empty set (0.00 sec)
Does anyone know if I can add the SQL_BUFFER_RESULT hint to an
INSERT ... SELECT ... ON DUPLICATE KEY UPDATE?

e.g.

INSERT INTO foo
SELECT SQL_BUFFER_RESULT * FROM bar
ON DUPLICATE KEY UPDATE foo.X = ...

Both my tables foo and bar are InnoDB; the idea is to release the locks on
bar as soon as possible by moving the result into a temporary table first.
A very first thought: disable the constraint before the import and
re-enable it after that. One way could be to set the foreign key checks
to false, or to alter the constraint and remove the 'cascade delete'
part. It's just a quick brainstorm, please verify the soundness of it; I
still need to get my...
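The session switch mentioned above is standard MySQL (a sketch; the import
itself is whatever the weekly job runs). Whether the ON DELETE CASCADE
action still fires while checks are disabled should be verified on the
target version, as the reply itself cautions:

  SET foreign_key_checks = 0;
  -- run the weekly import here
  SET foreign_key_checks = 1;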
Hello,

we have two tables associated with a foreign key constraint:
Table A with the primary key, and Table B with an "on delete cascade"
constraint. We want to delete datasets in Table B if the related dataset
in Table A is deleted - that works.

Now the problem: there is a weekly import defined...
Ah... Yes. Good point. I like this because I was planning on keeping
the output somewhere for a while (in case we need an "accounting" at
some point). So it will be easy enough to dump what's being deleted to
the screen while we loop over our candidates.

Thanks!

On Tue, Jul 14, 2009 at 10:16 AM, ...

That's assuming that there is a unique identifier field, like an auto
increment field. Although that could be added after the fact. Also,
you need to run the query multiple times until it returns no affected
records. So if there are 4 copies of a record, it would need to be run
3 times to get rid of them all.
You can combine the two queries you have in option 3 (you'll need to
change field names, but you should get the idea), something like this:

DELETE table1
FROM table1,
     (SELECT MAX(id) AS dupid, COUNT(id) AS dupcnt
      FROM table1
      WHERE field1 IS NOT NULL
      GROUP BY link_id
      HAVING dupcnt > 1) AS dups
WHERE table1.id = dups.dupid;
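A one-pass alternative that keeps the lowest id per linkage, instead of
re-running until zero rows are affected (same placeholder names as the
query above):

  DELETE t
  FROM table1 t
  JOIN table1 keeper
    ON keeper.link_id = t.link_id
   AND keeper.id < t.id;

Every row that has a lower-id twin matches the join, so all extra copies
go in one statement.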
To: mysql@lists.mysql.com
Subject: Removing Duplicate Records

In our database we have an Organizations table and a Contacts table,
and a linking table that associates Contacts with Organizations.
Occasionally we manually add to the linking table with information
gleaned from outside data sources. This is commonly the source of
duplicate linkages, but it's FAR from an everyday occurrence.

I have three options for dealing with the resulting duplicates and I
would appreciate some advice on which option might be best.

1. Attack the horrific spaghetti code that determines the Org and
Contact ids and then does the manual...
Hi,
I'm trying to import a dumpfile like so:
cat aac.sql | mysql -u root AAC
It all runs fine until I get something like:
ERROR 1061 (42000) at line 5671: Duplicate key name 'FK_mediaZip_to_zipSet'
Is there a way I can tell it to ignore or replace the key?
Thanks,
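The client's --force option keeps an import going past errors like this
(standard mysql client flag; whether skipping the statement is safe
depends on why the key already exists):

  cat aac.sql | mysql --force -u root AAC

Alternatively, drop the pre-existing index before importing; the owning
table isn't shown in the thread, so media_zip here is hypothetical:

  ALTER TABLE media_zip DROP KEY FK_mediaZip_to_zipSet;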
Thank you all, I solved the problem.

--- On Mon, 3/2/09, Darryle Steplight wrote:
From: Darryle Steplight
Subject: Re: Error: Duplicate entry '0' for key 'PRIMARY'
To: samc...@yahoo.com
Cc: mysql@lists.mysql.com, g...@primeexalia.com
Date: Monday, March 2, 2009, 2:32 PM

Are you trying to do an INSERT ... ON DUPLICATE KEY UPDATE? Do you want to
insert a new row if it doesn't already exist, or update one if it does?

On Mon, Mar 2, 2009 at 4:09 PM, sam rumaizan wrote:
> Are you talking about Length/Values1?

How do I modify the column to add a value? Can I do it with phpmyadmin?

--- On Mon, 3/2/09, Gary Smith wrote:
From: Gary Smith
Subject: Re: Error: Duplicate entry '0' for key 'PRIMARY'
To: samc...@yahoo.com, mysql@lists.mysql.com
Date: Monday, March 2, 2009, 1:58 PM

Easy. Ensure that all columns in the primary key...

Error: Duplicate entry '0' for key 'PRIMARY'
how can I fix it?
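A common cause of this exact message (hedged, since the thread never shows
the schema): the primary-key column is not AUTO_INCREMENT, so every insert
that omits it stores 0, and the second such insert collides. mytable and
id are placeholders:

  -- let the primary key generate its own values
  ALTER TABLE mytable MODIFY id INT NOT NULL AUTO_INCREMENT;

After this, inserts that leave id out (or pass NULL) receive a fresh value
instead of another 0.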
The reporting of two rows thing is to do with how MySQL handles
INSERT ... ON DUPLICATE KEY UPDATE statements; it will report 1 row
affected if it inserts, and 2 rows if it finds a duplicate key and has to
update as well.

http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html

Michael Dykman wrote:
It worked fine as you wrote it on my v5.0.45, although it reported 2 rows
affected on each subsequent run of the insert statement. I thought this
odd, as I only ran the same statement repeatedly, leaving me with one row
ever, but the value updated just fine.

+-------+--------------+------+-----+---------+-------+
| Field | Type         | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| name  | varchar(40)  | NO   | PRI |         |       |
| value | varchar(255) | NO   |     |         |       |
+-------+--------------+------+-----+---------+-------+

Query run on both systems:

INSERT INTO $TABLE SET NAME='atest', value=now()
ON DUPLICATE KEY UPDATE value=now();
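The affected-rows arithmetic explains the observation above: 1 for a fresh
insert, 2 when the duplicate-key path changes the row, 0 when the update
would leave it unchanged. Since value=now() changes on every run, each
rerun reports 2 (t stands in for the $TABLE above):

  INSERT INTO t SET name='atest', value='x'
    ON DUPLICATE KEY UPDATE value='x';
  -- first run: 1 row affected (inserted)
  -- reruns:    0 rows affected (duplicate found, value already 'x')

  INSERT INTO t SET name='atest', value=now()
    ON DUPLICATE KEY UPDATE value=now();
  -- reruns: 2 rows affected (duplicate found, value rewritten)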
To: mysql@lists.mysql.com
Subject: Re: How to remove the duplicate values in my table!

On Nov 19, 2008, at 3:24 AM, jean claude babin wrote:
> Hi,
>
> I found the bug in my servlet: when I run my application it enters one
> record into the database without duplicate values. Now I want to clean
> my table by removing all duplicate rows. Any thoughts?

I assume you have a unique record...
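If the table has no unique id column (the truncated reply above starts to
ask about that), one standby is rebuilding through a deduplicating copy (a
sketch; t, col_a and col_b are placeholders):

  CREATE TABLE t_dedup LIKE t;
  ALTER TABLE t_dedup ADD UNIQUE KEY uniq_all (col_a, col_b);
  INSERT IGNORE INTO t_dedup SELECT * FROM t;  -- IGNORE drops dup-key rows
  RENAME TABLE t TO t_old, t_dedup TO t;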
Hello,

I'm using Eclipse 3.3 and I use a model class HandlereplyModel.java to
insert values into a MySQL database using a MySQL 5.0 server. When I run
my application, I get duplicate values every time I enter a record.

My Java class is:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HandlereplyModel {
    public String insert(String fname, String lname, String Email, String ...
CREATE TABLE tx12 SELECT * FROM tx1 LEFT JOIN tx2 ON tx1.name1 = tx2.name1;

Error Code: 1060
Duplicate column name 'name1'

I would have thought MySQL would have named the second tx12.name1 as
"name1_1" so the column name is unique. But instead I get an error.
Is there a way around this b...
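The standard workaround is to enumerate and alias the colliding column
instead of SELECT * (a sketch; other_col is a placeholder for tx2's
remaining columns):

  CREATE TABLE tx12
  SELECT tx1.*, tx2.name1 AS name1_2, tx2.other_col
  FROM tx1
  LEFT JOIN tx2 ON tx1.name1 = tx2.name1;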
+---------------+------------+------+-----+---------+-------+
| Field         | Type       | Null | Key | Default | Extra |
+---------------+------------+------+-----+---------+-------+
| ID            | bigint(20) | NO   | PRI |         |       |
| AMOUNTINPENCE | bigint(20) | YES  |     | NULL    |       |
+---------------+------------+------+-----+---------+-------+

ACCOUNTPAYMENTACTION shares the primary key with ACCOUNTACTION.

I need to remove duplicate entries that occurred at a specific time in
ACCOUNTACTION...

>     on A1.ID = U1.ID
>     where A1.ACTIONDATE like '2008-08-01 02:00%'
>     and U1.ID is NULL
>   ) as D
> );
>
> Thanks for the pointers ;-)

-----Original Message-----
From: Magnus Smith [mailto:[EMAIL PROTECTED]]
Sent: ... 10:35
To: Ananda Kumar
Cc: mysql@lists.mysql.com
Subject: RE: removing duplicate entries

Yes, I can see you are correct. I tried setting up a little test case
myself.

CREATE TABLE ACCOUNTACTION (
  ID INT NOT NULL PRIMARY KEY,
  ACTIONDATE DATETIME,
  ACCOUNT_ID INT NOT NULL
);

INSERT INTO ACCOUNTACTION (ID, ACTIONDATE, ACCOUNT_ID)
VALUES ...,
       ('009', '2008-08-01 03:00:00', '104'),
       ('010', '2008-08-01 03:00:00', '105'),
       ('011', '2008-08-01 02:00:00', '106');

INSERT INTO ACCOUNTPAYMENTACTION (ID, AMOUNT)
VALUES ('001', '1000'),
       ('002', '1000'),
       ...

Check that the rows returned are the ones you want to keep and are indeed
duplicates.

On 8/6/08, Magnus Smith <[EMAIL PROTECTED]> wrote:
>
> When I try the first suggestion (i) then I get all the 1682 duplicate
> rows. The thing is that I need to keep the originals, which are the ones
> with the lowest ACCOUNTACTION.ID value.
>
> The second suggestion (ii) gives me 563 rows that are the duplicates
> with the lowest ACCOUNTACTION.ID, which are...
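Following the thread's own pattern (keep the lowest ACCOUNTACTION.ID per
duplicate group), a sketch against the test case above; treating
(ACTIONDATE, ACCOUNT_ID) as the duplicate group is an assumption based on
the test data:

  DELETE A
  FROM ACCOUNTACTION A
  JOIN ACCOUNTACTION keeper
    ON keeper.ACTIONDATE = A.ACTIONDATE
   AND keeper.ACCOUNT_ID = A.ACCOUNT_ID
   AND keeper.ID < A.ID
  WHERE A.ACTIONDATE LIKE '2008-08-01 02:00%';

  -- ACCOUNTPAYMENTACTION shares the primary key, so clear its orphans too
  DELETE P
  FROM ACCOUNTPAYMENTACTION P
  LEFT JOIN ACCOUNTACTION A ON A.ID = P.ID
  WHERE A.ID IS NULL;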