Overview

The first workshop on evaluating IR systems with Large Language Models
(LLMs) is accepting submissions that describe original research findings,
preliminary research results, proposals for new work, and recent relevant
studies already published in high-quality venues. The workshop will have
both an in-person and a virtual component, and submissions are welcome from
researchers who cannot attend in person; they will present their work in
the virtual component.

Topics of interest

We welcome both full-paper and extended-abstract submissions on topics
including, but not limited to:

   - LLM-based evaluation metrics for traditional IR and generative IR.
   - Agreement between human and LLM labels.
   - Effectiveness and/or efficiency of LLMs in producing robust relevance
   labels.
   - Investigating LLM-based relevance estimators for potential systemic
   biases.
   - Automated evaluation of text generation systems.
   - End-to-end evaluation of Retrieval-Augmented Generation (RAG) systems.
   - Trustworthiness in LLM-based evaluation.
   - Prompt engineering for LLM-based evaluation.
   - Effectiveness and/or efficiency of LLMs as ranking models.
   - LLMs in specific IR tasks such as personalized search, conversational
   search, and multimodal retrieval.
   - Challenges and future directions in LLM-based IR evaluation.

Submission guidelines

We welcome the following submissions:

   - Previously unpublished manuscripts will be accepted as extended
   abstracts or full papers (any length from 1 to 9 pages) with unlimited
   references, formatted according to the latest ACM SIG proceedings template
   available at http://www.acm.org/publications/proceedings-template.
   - Previously published manuscripts may be submitted in their original
   format.

All submissions should be made through EasyChair:
https://easychair.org/conferences/?conf=llm4eval

All papers will be peer-reviewed (single-blind) by the program committee
and judged on their relevance to the workshop, especially to the main
themes identified above, and on their potential to generate discussion.
Already published studies, submitted in their original format, will be
reviewed for their relevance to this workshop. All submissions must be in
English and in PDF format.

Please note that the workshop will have an in-person component (held with
SIGIR 2024) and a virtual component (held at a later date on SIGIR VF).
During submission, authors should select their preferred component. All
accepted papers will be presented as posters, with a few selected for
spotlight talks. The workshop is non-archival: accepted papers may be
uploaded to arXiv.org, which allows submission elsewhere. The workshop’s
website will maintain links to the arXiv versions of the papers.

Important Dates

   - Submission deadline: April 25, 2024 (AoE)
   - Acceptance notifications: May 31, 2024 (AoE)
   - Workshop date: July 18, 2024

Website and Contact

More details are available at https://llm4eval.github.io/cfp/.
For any questions about paper submission, you may contact the workshop
organizers at llm4e...@easychair.org.