-
I believe there are a few calls that need to be made on the `benchmark_results` object (e.g. `join_revisions`) before you can turn it into a table (otherwise you will get duplicate model entries). Note that you can also just download the table from the leaderboard, which you can also run locally using:

```python
from mteb.leaderboard import demo

demo.launch()
```

That way you get the exact processing of the leaderboard (though I agree that it is not quite as convenient).
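A minimal sketch of that sequence (only `join_revisions()` is confirmed above; `mteb.load_results()` as the entry point and `to_dataframe()` as the table export are assumptions about the API):

```python
import mteb

# Load the raw benchmark results; assumed entry point returning a
# BenchmarkResults object.
benchmark_results = mteb.load_results()

# Collapse multiple revisions of the same model into a single entry;
# skipping this is what produces the duplicate model rows.
benchmark_results = benchmark_results.join_revisions()

# Export to a table; to_dataframe() is an assumed method name.
df = benchmark_results.to_dataframe()
print(df.head())
```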
-
Thanks for the quick response @KennethEnevoldsen! I'm mainly interested in viewing the leaderboard scores of models I'm developing before actually submitting them to MTEB, so viewing the existing leaderboard unfortunately doesn't help there. I will have a look at the calls that need to be made. Would you be interested in having this functionality in MTEB at some point, for example, a single function that can be used to reproduce the MTEB leaderboard tables from a local results repo?
-
You should try:

```python
benchmark_results = benchmark_results.join_revisions()
```
-
Thanks @KennethEnevoldsen and @x-tabdeveloping! I've managed to figure it out. I've created a PR as well: #2015.
-
Hi!

I'm trying to reproduce the leaderboard tables locally, but I'm running into some issues. Version: `mteb==1.34.4`. I'm using the following code, where the results repo is a fork of https://github.com/embeddings-benchmark/results containing a subset of the existing model results:

I get the following error:

I could not find a simple script in the docs to do this, and from looking at the new leaderboard code, this looks like what happens on the live leaderboard. Is there an extra step I'm missing? Thanks in advance!
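For reference, pointing the loader at a local fork presumably looks something like this (a sketch only, not the original snippet; the `results_repo` parameter and `download_latest` flag are assumptions about `mteb.load_results`):

```python
import mteb

# Point the loader at a local clone of the results fork; the
# results_repo parameter name is an assumption about the API.
benchmark_results = mteb.load_results(
    results_repo="path/to/results-fork",  # hypothetical local path
    download_latest=False,  # assumed flag: use the local checkout as-is
)

# As suggested in the replies above, join revisions before building
# the table to avoid duplicate model entries.
benchmark_results = benchmark_results.join_revisions()
```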