Some further tricks will (probably) lead you to your goal. I suppose you used duplicated() or something similar to get a vector of the positions of the duplicated values:

pos.dup <- which(duplicated(value))

then do
diff.pos.dup <- diff(pos.dup)

and you get the indices to delete:

pos.delete <- pos.dup[which(diff.pos.dup == 1)]


I leave some tweaking to you, as you may have to adjust some indices slightly by adding or subtracting 1 (I am never exactly sure how the indices from diff() line up).
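For completeness, here is a rough sketch of how those pieces might fit together on your example data; the unique()/in.run + 1 step is exactly the kind of index tweaking I meant, so treat it as a starting point rather than a polished solution:

value <- c(0.52, 0.23, 0.43, 0.21, 0.32, 0.32, 0.32, 0.32, 0.32,
           0.12, 0.46, 0.09, 0.32, 0.25)

pos.dup      <- which(duplicated(value))   # 6 7 8 9 13
diff.pos.dup <- diff(pos.dup)              # 1 1 1 4

# a gap of 1 means the duplicate sits right next to the previous one,
# i.e. it belongs to a run of consecutive repeats
in.run     <- which(diff.pos.dup == 1)
pos.delete <- unique(pos.dup[c(in.run, in.run + 1)])   # 6 7 8 9

value.clean <- value[-pos.delete]   # keeps the 5th and the isolated 13th 0.32

This drops observations 6 to 9 and leaves the non-adjacent duplicate at position 13 untouched, which I think is what you are after.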


HTH
Jannis
Moohwan Kim wrote:
Dear R family,

Suppose I have two series.

order value
1  0.52
2  0.23
3  0.43
4  0.21
5  0.32
6  0.32
7  0.32
8  0.32
9  0.32
10 0.12
11 0.46
12 0.09
13 0.32
14 0.25

For these two series, I figured out how to detect the locations of the
duplicate values.
The next step is to remove the repeated values, except for a duplicate
that is not adjacent to the others.
In other words, while keeping the 13th value, I want to remove
observations 6 through 9.
That is my end goal.

Could you help me reach the goal?

best
moohwan

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
