
Accelerated Preference Elicitation with LLM-Based Proxies

Bidders in combinatorial auctions face significant challenges when describing their preferences to an auctioneer. Classical work on preference elicitation focuses on query-based techniques inspired by proper learning—often via proxies interfacing between bidders and an auction mechanism—to incrementally learn bidder preferences as needed to compute efficient allocations. Although such elicitation mechanisms enjoy theoretical query efficiency, the amount of communication required may still be too cognitively taxing for bidders in practice.
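
To make the query-based setting concrete, the following is a minimal illustrative sketch (not from the paper) of classical value-query elicitation: the auctioneer asks bidders for the values of candidate bundles under a query budget. The names `Bidder`, `value_query`, and `elicit_values` are hypothetical, and the exponential number of bundles is exactly what makes naive elicitation burdensome.

```python
from itertools import combinations

class Bidder:
    def __init__(self, valuation):
        # valuation: dict mapping frozenset-of-items -> value (unknown bundles -> 0)
        self.valuation = valuation

    def value_query(self, bundle):
        # Answer one value query: report the value of a single bundle.
        return self.valuation.get(frozenset(bundle), 0.0)

def elicit_values(bidders, items, max_queries):
    """Collect bundle values with value queries until the budget runs out.
    An allocator would then compute an efficient allocation from `learned`."""
    learned = [{} for _ in bidders]
    queries = 0
    for size in range(1, len(items) + 1):
        for bundle in map(frozenset, combinations(items, size)):
            for i, bidder in enumerate(bidders):
                if queries >= max_queries:
                    return learned, queries
                learned[i][bundle] = bidder.value_query(bundle)
                queries += 1
    return learned, queries

# Example: two bidders, three items.
bidders = [Bidder({frozenset({"A", "B"}): 10.0}), Bidder({frozenset({"C"}): 4.0})]
learned, used = elicit_values(bidders, ["A", "B", "C"], max_queries=20)
print(used, learned)
```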

We propose a family of efficient LLM-based proxy designs for eliciting preferences from bidders using natural language. Our proposed mechanism combines LLM pipelines and DNF-proper-learning techniques to quickly approximate preferences when communication is limited. To validate our approach, we create a testing sandbox for elicitation mechanisms that communicate in natural language. In our experiments, our most promising LLM proxy design reaches approximately efficient outcomes with five times fewer queries than classical proper-learning-based elicitation mechanisms.
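
As a hedged sketch of how an LLM proxy might be combined with DNF proper learning (an assumption about the general shape of such a pipeline, not the paper's actual design), the snippet below learns minimal terms of a monotone DNF preference from membership-style questions. `ask_bidder` stands in for an LLM-mediated natural-language exchange and is stubbed with a hidden ground-truth DNF so the example runs; all names are illustrative.

```python
TRUE_DNF = [frozenset({"A", "B"}), frozenset({"C"})]  # hidden bidder preference

def ask_bidder(bundle):
    """Membership query: 'Would this bundle satisfy you?'
    A real proxy would phrase and interpret this via an LLM pipeline."""
    return any(term <= bundle for term in TRUE_DNF)

def minimize_positive(bundle):
    """Shrink a satisfying bundle to a minimal DNF term using membership queries."""
    term = set(bundle)
    for item in sorted(bundle):
        if ask_bidder(term - {item}):
            term.discard(item)
    return frozenset(term)

def learn_monotone_dnf(positive_examples):
    """Proper-learning step: turn satisfying bundles mentioned by the bidder
    into minimal DNF terms of the learned hypothesis."""
    hypothesis = set()
    for bundle in positive_examples:
        if ask_bidder(frozenset(bundle)):
            hypothesis.add(minimize_positive(frozenset(bundle)))
    return hypothesis

# Example: two bundles surfaced during a natural-language conversation.
print(learn_monotone_dnf([{"A", "B"}, {"C", "D"}]))
# -> two minimal terms: frozenset({'A', 'B'}) and frozenset({'C'})
```

The design intuition is that the LLM handles the natural-language interface while the DNF learner keeps the number of structured queries small, which is the trade-off the abstract highlights.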
