Replies: 4 comments 6 replies
-
You can find example scripts in the README, lines 41 to 97 at commit 4d23c6c.
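For instance, the usage shown there follows this pattern (a minimal sketch; the task and model names below are placeholders, not taken from this thread):

```python
import mteb
from sentence_transformers import SentenceTransformer

# Placeholder model and task; substitute your own.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
tasks = mteb.get_tasks(tasks=["Banking77Classification"])

evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```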
-
Thanks for the quick response. Actually, I am looking for an evaluation script for the whole MMTEB benchmark, not a single task from MTEB. The file you pointed to only seems to cover evaluating one MTEB task.
-
Similarly, you can run:

```python
import mteb

# Load the full multilingual MMTEB benchmark (all tasks, not just one).
benchmark = mteb.get_benchmark("MTEB(Multilingual, v1)")
evaluation = mteb.MTEB(tasks=benchmark)
evaluation.run(model)  # `model` is any mteb-compatible embedding model
```
-
@KennethEnevoldsen One follow-up question: on the MMTEB leaderboard, several models need a detailed instruction for each task (e.g., f'Instruct: {task_description}\nQuery: {query}' for multilingual-e5-large-instruct). Could you suggest how to run the MMTEB evaluation for these models? Thanks!
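For reference, here is a rough sketch of the workaround I have in mind: wrapping the model so queries get the instruct prefix before encoding. The wrapper class, the single fixed task description, and the prompt handling are my own illustration, not mteb's official integration:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

class E5InstructWrapper:
    """Hypothetical wrapper applying the E5-instruct prompt template to queries."""

    def __init__(self, model_name: str, task_description: str):
        self.model = SentenceTransformer(model_name)
        # Assumption: a single task description reused for the whole run.
        self.task_description = task_description

    def encode(self, sentences: list[str], prompt_type=None, **kwargs) -> np.ndarray:
        # Per the model card, only queries get the instruct prefix;
        # passages/documents are encoded as-is.
        if prompt_type is None or prompt_type == "query":
            sentences = [
                f"Instruct: {self.task_description}\nQuery: {s}" for s in sentences
            ]
        # Forward only batch_size so mteb-specific kwargs don't reach
        # SentenceTransformer.encode.
        return self.model.encode(sentences, batch_size=kwargs.get("batch_size", 32))

model = E5InstructWrapper(
    "intfloat/multilingual-e5-large-instruct",
    task_description="Given a question, retrieve passages that answer it",
)
```

Is this roughly the right approach, or does mteb already handle these prompts itself (e.g., via mteb.get_model)?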
-
Hi, I recently found your MMTEB work. The benchmark looks great, and I would like to evaluate some models on it. Could you share a sample evaluation script? Thanks!