How do I remove duplicate items from an array in Perl?
I have an array in Perl:
my @my_array = ("one","two","three","two","three");
How do I remove the duplicates from the array?
Tags: perl, arrays, unique, duplicates
asked Aug 11 '08 at 10:04 by David; edited Dec 25 '14 at 19:57 by Сухой27
11 Answers
You can do something like this as demonstrated in perlfaq4:
sub uniq {
my %seen;
grep !$seen{$_}++, @_;
}
my @array = qw(one two three two three);
my @filtered = uniq(@array);
print "@filtered\n";
Outputs:
one two three
If you want to use a module, try the uniq function from List::MoreUtils.
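For completeness, a minimal usage sketch of that module route (assuming List::MoreUtils has been installed from CPAN):
use strict;
use warnings;
use List::MoreUtils qw(uniq);
my @array    = qw(one two three two three);
my @filtered = uniq @array;   # ("one", "two", "three"), original order preserved
print "@filtered\n";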
please don't use $a or $b in examples as they are the magic globals of sort()
– szabgab
Sep 17 '08 at 7:50
It's a my lexical in this scope, so it's fine. That being said, possibly a more descriptive variable name could be chosen.
– ephemient
Jan 18 '10 at 17:51
@ephemient yes, but if you were to add sorting in this function then it would trump $::a and $::b, wouldn't it?
– vol7ron
Feb 21 '12 at 16:45
@BrianVandenberg Welcome to the world of 1987 - when this was created - and almost 100% backward compatibility for Perl - so it cannot be eliminated.
– szabgab
Jun 25 '12 at 8:19
sub uniq { my %seen; grep !$seen{$_}++, @_ }
is a better implementation since it preserves order at no cost. Or even better, use the one from List::MoreUtils.
– ikegami
Nov 6 '12 at 18:51
The Perl documentation comes with a nice collection of FAQs. Your question is frequently asked:
% perldoc -q duplicate
The answer, copied and pasted from the output of the command above, appears below:
Found in /usr/local/lib/perl5/5.10.0/pods/perlfaq4.pod
How can I remove duplicate elements from a list or array?
(contributed by brian d foy)
Use a hash. When you think the words "unique" or "duplicated", think
"hash keys".
If you don't care about the order of the elements, you could just
create the hash then extract the keys. It's not important how you
create that hash: just that you use "keys" to get the unique elements.
my %hash = map { $_, 1 } @array;
# or a hash slice: @hash{ @array } = ();
# or a foreach: $hash{$_} = 1 foreach ( @array );
my @unique = keys %hash;
If you want to use a module, try the "uniq" function from
"List::MoreUtils". In list context it returns the unique elements,
preserving their order in the list. In scalar context, it returns the
number of unique elements.
use List::MoreUtils qw(uniq);
my @unique = uniq( 1, 2, 3, 4, 4, 5, 6, 5, 7 ); # 1,2,3,4,5,6,7
my $unique = uniq( 1, 2, 3, 4, 4, 5, 6, 5, 7 ); # 7
You can also go through each element and skip the ones you've seen
before. Use a hash to keep track. The first time the loop sees an
element, that element has no key in %Seen. The "next" statement creates
the key and immediately uses its value, which is "undef", so the loop
continues to the "push" and increments the value for that key. The next
time the loop sees that same element, its key exists in the hash and
the value for that key is true (since it's not 0 or "undef"), so the
next skips that iteration and the loop goes to the next element.
my @unique = ();
my %seen = ();
foreach my $elem ( @array )
{
next if $seen{ $elem }++;
push @unique, $elem;
}
You can write this more briefly using a grep, which does the same
thing.
my %seen = ();
my @unique = grep { ! $seen{ $_ }++ } @array;
perldoc.perl.org/…
– szabgab
Sep 17 '08 at 7:48
John iz in mah anzers stealing mah rep!
– brian d foy
Oct 9 '08 at 23:41
I think you should get bonus points for actually looking the question up.
– Brad Gilbert
Oct 24 '08 at 15:14
I like that the best answer is 95% copy-paste and 3 sentences of OC. To be perfectly clear, this is the best answer; I just find that fact amusing.
– Parthian Shot
Jul 21 '14 at 18:23
Install List::MoreUtils from CPAN
Then in your code:
use strict;
use warnings;
use List::MoreUtils qw(uniq);
my @dup_list = qw(1 1 1 2 3 4 4);
my @uniq_list = uniq(@dup_list);
That's the answer! But I can only vote you up once.
– Axeman
Oct 5 '08 at 4:42
The fact that List::MoreUtils is not bundled w/ perl kinda damages the portability of projects using it :( (I for one won't)
– yPhil
Mar 19 '12 at 2:00
@Ranguard: @dup_list should be inside the uniq call, not @dups
– incutonez
Nov 11 '13 at 14:48
@yassinphilip CPAN is one of the things that makes Perl as powerful and great as it can be. If you write your projects based only on core modules, you put a huge limit on your code, along with possibly poorly written code that attempts to do what some modules do much better, just to avoid using them. Also, using core modules doesn't guarantee anything, as different Perl versions can add or remove core modules from the distribution, so portability still depends on that.
– Francisco Zarabozo
Jun 27 '17 at 14:38
My usual way of doing this is:
my %unique = ();
foreach my $item (@myarray)
{
$unique{$item} ++;
}
my @myuniquearray = keys %unique;
If you use a hash and add the items to it as keys, you also get the bonus of knowing how many times each item appears in the list.
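For example, here is a small sketch (reusing %unique from the loop above, not part of the original answer) that reads those counts back:
# After the loop, each value in %unique is the number of occurrences of that key.
for my $item (sort keys %unique) {
    print "$item appeared $unique{$item} time(s)\n";
}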
This has the downside of not preserving the original order, if you need it.
– Nathan Fellman
Feb 18 '14 at 12:34
It is better to use slices instead of a foreach loop: @unique{@myarray}=()
– Onlyjob
Sep 20 '15 at 15:46
This can be done with a simple Perl one-liner.
my @in=qw(1 3 4 6 2 4 3 2 6 3 2 3 4 4 3 2 5 5 32 3); #Sample data
my @out=keys %{{ map{$_=>1}@in}}; # Perform PFM
print join ' ', sort{$a<=>$b} @out;# Print data back out sorted and in order.
The PFM block does this: the data in @in is fed into map, which builds an anonymous hash; the keys are then extracted from that hash and fed into @out.
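For readability, here is the same one-liner expanded into separate steps (a sketch with throwaway variable names, not part of the original answer):
my @in = qw(1 3 4 6 2 4 3 2 6 3 2 3 4 4 3 2 5 5 32 3);
my %anon = map { $_ => 1 } @in;            # map builds the key/value pairs; the hash keeps only unique keys
my @out  = keys %anon;                     # extract the unique keys
print join ' ', sort { $a <=> $b } @out;   # 1 2 3 4 5 6 32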
The variable @array is the list with duplicate elements
my %seen = ();
my @unique = grep { ! $seen{$_}++ } @array;
That last one was pretty good. I'd just tweak it a bit:
my @arr;
my @uniqarr;
foreach my $var ( @arr ){
if ( ! grep { $_ eq $var } @uniqarr ){   # compare with eq, not a regex, to avoid partial matches
push( @uniqarr, $var );
}
}
I think this is probably the most readable way to do it.
More independent..
– laki
Dec 26 '13 at 3:52
Method 1: Use a hash
Logic: A hash can only have unique keys, so iterate over the array and assign any value to each element, keeping the element as the key of that hash. Return the keys of the hash; that is your unique array.
my @unique = keys %{ { map { $_ => 1 } @array } };
Method 2: Extension of method 1 for reusability
It's better to make a subroutine if we need this functionality multiple times in our code.
sub get_unique {
my %seen;
grep !$seen{$_}++, @_;
}
my @unique = get_unique(@array);
Method 3: Use module List::MoreUtils
use List::MoreUtils qw(uniq);
my @unique = uniq(@array);
Try this; note that the uniq helper below works on unsorted input, and you can combine it with sort if you also want the result ordered.
use strict;
# Helper function to remove duplicates in a list.
sub uniq {
my %seen;
grep !$seen{$_}++, @_;
}
my @teststrings = ("one", "two", "three", "one");
my @filtered = uniq @teststrings;
print "uniq: @filteredn";
my @sorted = sort @teststrings;
print "sort: @sortedn";
my @sortedfiltered = uniq sort @teststrings;
print "uniq sort : @sortedfilteredn";
Using the concept of unique hash keys:
my @array = ("a","b","c","b","a","d","c","a","d");
my %hash = map { $_ => 1 } @array;
my @unique = keys %hash;
print "@unique", "\n";
Output:
a c b d
Previous answers pretty much summarize the possible ways of accomplishing this task.
However, I suggest a modification for those who don't care about counting the duplicates, but do care about order.
my @record = qw( yeah I mean uh right right uh yeah so well right I maybe );
my %record;
print grep !$record{$_} && ++$record{$_}, @record;
Note that the previously suggested grep !$seen{$_}++ ... increments $seen{$_} before negating, so the increment occurs regardless of whether the element has already been %seen or not. The above, however, short-circuits when $record{$_} is true, leaving what's been heard once 'off the %record'.
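To make the difference visible, here is a quick sketch (throwaway names, not from the original answer) showing that both greps return the same unique list but leave different hash contents behind:
use strict;
use warnings;
my @record = qw( a b a );
my %seen;
my @u1 = grep !$seen{$_}++, @record;              # %seen now holds counts: a => 2, b => 1
my %rec;
my @u2 = grep !$rec{$_} && ++$rec{$_}, @record;   # %rec only ever holds 1s: a => 1, b => 1
print "counts: ", join(', ', map { "$_=$seen{$_}" } sort keys %seen), "\n";
print "flags:  ", join(', ', map { "$_=$rec{$_}" } sort keys %rec), "\n";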
You could also go for this ridiculousness, which takes advantage of autovivification and existence of hash keys:
...
grep !(exists $record{$_} || undef $record{$_}), @record;
That, however, might lead to some confusion.
And if you care about neither order nor duplicate count, you could go for another hack using hash slices and the trick I just mentioned:
...
undef @record{@record};
keys %record; # your record, now probably scrambled but at least deduped