Inspired by @geocar, here’s the J version. It assumes the values are stored in an Nx2 array where the first column is the score and the second is the id:

r ,. >./\(* >:@i.@#) ~: 0{"1 r =: \:~ x

This appends the ranking as a third column. There’s probably a more elegant way to do this, but I’m on my phone rn. Breakdown:

Sort the participants by score, descending

Get the “nub sieve”, the bitstring with a 1 at the index of the first occurrence of each element

Multiply the bitstring by the index array. First occurrences become their index, duplicates remain zero.

Get the running maximum of the list, which replaces zeros with the index of the first occurrence
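For readers who don’t speak J, here’s a rough JavaScript translation of those four steps (my own sketch; the `rankRows` name and the `[score, id]` pair layout are assumptions, not part of the original):

```javascript
// Standard competition ranking, mirroring the J breakdown above.
// `rows` is an array of [score, id] pairs.
function rankRows(rows) {
  // 1. Sort the participants by score, descending
  const sorted = [...rows].sort((a, b) => b[0] - a[0])
  // 2. "Nub sieve": 1 at the first occurrence of each score, 0 at duplicates
  const sieve = sorted.map(([s], i) => (i === 0 || sorted[i - 1][0] !== s ? 1 : 0))
  // 3. Multiply by the 1-based indices: first occurrences keep their index,
  //    duplicates remain zero
  const marked = sieve.map((bit, i) => bit * (i + 1))
  // 4. Running maximum replaces zeros with the index of the first occurrence
  let max = 0
  const ranks = marked.map(v => (max = Math.max(max, v)))
  // Stitch the rank on as a third column
  return sorted.map(([s, id], i) => [s, id, ranks[i]])
}
```

The running maximum works because after a descending sort, every duplicate sits immediately after the first occurrence of its score, whose 1-based index is the largest value seen so far.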

Here’s another Elixir implementation, similar to @hauleth’s!

By using Enum.group_by/2 up front, our reduce step is a little simpler, because the rank of each group is the rank of the prior group plus the length of the prior group. I also changed the output a little: it’s just a map whose keys are the ranks and whose values are the data at that rank.

defmodule Ranking do
  @mock_data [
    %{id: 1, points: 39},
    %{id: 2, points: 26},
    %{id: 3, points: 50},
    %{id: 4, points: 24},
    %{id: 5, points: 39},
    %{id: 6, points: 67},
    %{id: 7, points: 39},
    %{id: 8, points: 50},
    %{id: 9, points: 48},
    %{id: 10, points: 24}
  ]

  def mock_data do
    @mock_data
  end

  def rank(data) do
    data
    # Grouping then sorting gives us a list of {points, data} tuples
    |> Enum.group_by(& &1.points)
    |> Enum.sort(&(elem(&1, 0) > elem(&2, 0)))
    |> Enum.reduce({1, %{}}, fn {_points, data}, {next_rank, ranking} ->
      {next_rank + length(data), Map.put(ranking, next_rank, data)}
    end)
    |> elem(1)
  end
end
Ranking.mock_data()
|> Ranking.rank()
|> IO.inspect()

And here’s a similar approach in JavaScript:

const mockData = [
  {id: 1, points: 39},
  {id: 2, points: 26},
  {id: 3, points: 50},
  {id: 4, points: 24},
  {id: 5, points: 39},
  {id: 6, points: 67},
  {id: 7, points: 39},
  {id: 8, points: 50},
  {id: 9, points: 48},
  {id: 10, points: 24}
]

function rank(data) {
  const grouped = data.reduce((grouped, datum) => {
    if (!grouped.get(datum.points)) grouped.set(datum.points, [])
    grouped.get(datum.points).push(datum)
    return grouped
  }, new Map()) // Use a Map so we don't have to parseInt on object keys
  const sorted = Array.from(grouped.entries()).sort(([a], [b]) => b - a)
  return sorted.reduce(([nextRank, ranking], [, data]) => {
    ranking[nextRank] = data
    return [nextRank + data.length, ranking]
  }, [1, {}])[1]
}

console.log(JSON.stringify(rank(mockData), null, 2))

My problem with that solution is that it needlessly iterates through all the data at least 3 times (once for Enum.group_by/2, once for sort/2, and once for length/1 inside Enum.reduce/3). This can be a bit of a problem.

I think the solution linked in the original post iterates over the data twice: once for sorting, then once over each item using recursion. This solution iterates once over the data to group by score, then twice over the groups. In the worst case, where every piece of data has a unique point value, that’s one extra full iteration compared with the linked solution, i.e. 3 passes over all the data. In general, though, the second and third iterations are sized by the number of unique point values in the data set. Either way, both approaches are still O(n) outside of the sort.
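To illustrate the shape being compared, here’s a hypothetical sketch of the sort-then-single-pass approach (my own code, not the linked solution itself): sort once, then walk the sorted list, starting a new rank only when the score changes.

```javascript
// Competition ranking in one pass over the sorted data.
// Ties share a rank; each new distinct score's rank is its 1-based position.
function rankSinglePass(data) {
  const sorted = [...data].sort((a, b) => b.points - a.points)
  const ranking = {}
  let rank = 1
  sorted.forEach((datum, i) => {
    // A new rank starts only where the score differs from the previous entry,
    // and it equals the 1-based position, so tied groups are skipped over.
    if (i > 0 && sorted[i - 1].points !== datum.points) rank = i + 1
    if (!ranking[rank]) ranking[rank] = []
    ranking[rank].push(datum)
  })
  return ranking
}
```

This keeps one sort plus a single iteration, at the cost of building the rank groups while walking instead of with an up-front group-by.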

Nice trick with that Enum.sort_by/2. I used a tuple in my solution so I wouldn’t need to handle item, nil as a separate case. I thought about using a pattern match in the closure, but I figured a separate helper function would be clearer.

Here you have a much smaller and, IMHO, much more readable example of the same:

EDIT: The earlier version was slightly buggy, as it didn’t always return correctly increased ranks; this fixes it.

Cool! Thanks for sharing :)
