Removing duplicates in lists
Here's the quick way to eliminate duplicates from a list while keeping the original order, using a set of already-seen items inside a list comprehension:
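A minimal sketch, with numbers standing in for your list:

```python
numbers = [1, 2, 2, 3, 1]

# The set tracks what we've already kept; set.add() returns None,
# so the "or" trick lets the whole thing stay a one-liner
seen = set()
deduped = [x for x in numbers if not (x in seen or seen.add(x))]

print(deduped)  # [1, 2, 3]
```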
Voila! The deduped list, [1, 2, 3], is spic and span, maintaining the order of the original.
When items are unhashable
When our list is a party full of unhashable items such as nested lists or dictionaries, a nested loop approach is a lifesaver:
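A sketch of what such a helper could look like (the name remove_duplicates matches the call below):

```python
def remove_duplicates(items):
    """Return a new list without duplicates, even when items are unhashable."""
    unique = []
    for item in items:
        # Compare against everything kept so far: equality checks work fine
        # for nested lists and dicts, unlike hashing
        is_duplicate = False
        for kept in unique:
            if kept == item:
                is_duplicate = True
                break
        if not is_duplicate:
            unique.append(item)
    return unique

print(remove_duplicates([[1, 2], {"a": 1}, [1, 2]]))  # [[1, 2], {'a': 1}]
```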
Now, using remove_duplicates(your_nested_list), we get a squeaky clean list.
Order preservation and performance
Since Python 3.7, order's the new black. Dictionary keys are guaranteed to maintain insertion order:
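A minimal sketch, again with a sample list named numbers:

```python
numbers = [4, 5, 5, 6, 4]

# dict.fromkeys() keeps only the first occurrence of each key,
# and dict keys preserve insertion order
deduped = list(dict.fromkeys(numbers))

print(deduped)  # [4, 5, 6]
```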
The deduped list here ends up being [4, 5, 6], keeping its original order. It's like magic, but it's just Python doing what it does best.
Creating reusable functions
Keep it DRY—Don't Repeat Yourself. Here's how to create a helper function to deduplicate any list:
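One way to write it, assuming a helper name like deduplicate:

```python
def deduplicate(items):
    """Return a new list with duplicates removed, preserving order.

    Works for any list of hashable items.
    """
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(deduplicate(["a", "b", "a", "c"]))  # ['a', 'b', 'c']
```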
This function can be reused throughout your code, promoting readability and maintainability.
Beyond simple lists: Complex scenarios
For more intricate scenarios, where a list contains custom objects or you need to remove duplicates based on object attributes, you might need to flex your Python muscles a bit more.
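For instance, deduplicating by an attribute might look like this (the Product class and its name field are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: float

products = [Product("apple", 1.00), Product("apple", 1.20), Product("banana", 0.50)]

# Keep only the first product seen for each name
seen_names = set()
unique_products = []
for product in products:
    if product.name not in seen_names:
        seen_names.add(product.name)
        unique_products.append(product)

print([p.name for p in unique_products])  # ['apple', 'banana']
```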
Also, you can pull a masterstroke by using the groupby function from the itertools module with sorted lists:
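Roughly like so, with fruits as a stand-in for your data:

```python
from itertools import groupby

fruits = ["banana", "apple", "apple", "banana"]

# groupby() only merges consecutive equal items, so sort first
deduped = [key for key, _ in groupby(sorted(fruits))]

print(deduped)  # ['apple', 'banana']
```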
Just like that, deduped gives you ['apple', 'banana'].