A Study of Language Models for Exploiting User Feedback in Information Retrieval (Paperback)



Price: R2,035


Product Description

Feedback is an important technique in Information Retrieval for having users provide contextual information about their search needs, with the goal of improving retrieval accuracy and achieving personalization. Relevance feedback has been studied extensively, and in recent years new types of feedback, such as implicit feedback and collective feedback, have attracted much research interest. However, relatively few works have explored language modeling techniques for feedback. In this thesis, I study how to use language models to exploit user feedback, including long-term implicit feedback and short-term explicit term-based feedback. I show that language models have unique advantages in modeling users' search interests and preferences over the long term, as well as in capturing term and sub-topic relevance in the short term.

In particular, I first study exploiting implicit feedback from a user's long-term search log. Language models are constructed to represent both the information needs in past searches and the history context for new searches. These history language models capture the user's search interests and preferences and thus can help personalize search results for new queries. Moreover, by selecting topics in a user's long-term search history that represent long-lasting exploratory interests and building language models for these topics, the user can receive personalized recommendations of new information without issuing a query.

I also study term-based explicit feedback, which deviates from traditional document-based relevance feedback. By modeling query sub-topics using language models and asking the user for term-level relevance judgments, both term-level and sub-topic-level relevance can be incorporated into a new query model that improves retrieval accuracy.

Finally, I have designed the UCAIR system to support the development, deployment, and evaluation of feedback algorithms for personalized search and recommendation. The system not only implements some of the previously proposed algorithms but also provides a highly reusable and easily extensible platform for designing and testing new feedback algorithms in the language modeling framework. It is hoped that this system will help reduce the difficulty of building personalized feedback systems and generating feedback evaluation data sets.
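To make the long-term implicit feedback idea concrete, here is a minimal illustrative sketch (not the thesis's actual UCAIR implementation; all function names, parameter values, and example texts are assumptions): a unigram "history" language model is estimated from a user's past queries, interpolated with the current query model, and used to score documents under their own language models.

```python
# Illustrative sketch of history-based personalization in the language
# modeling framework. Names and weights are assumptions, not the thesis's
# actual algorithms or the UCAIR system's API.

import math
from collections import Counter


def unigram_lm(texts, smoothing=1e-6):
    """Estimate a unigram language model P(w) from a list of text strings."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    return {w: (c + smoothing) / (total + smoothing * vocab) for w, c in counts.items()}


def interpolate(query_lm, history_lm, alpha=0.8):
    """Mix the current query model with the long-term history model:
    P(w | q') = alpha * P(w | q) + (1 - alpha) * P(w | history)."""
    vocab = set(query_lm) | set(history_lm)
    return {w: alpha * query_lm.get(w, 0.0) + (1 - alpha) * history_lm.get(w, 0.0)
            for w in vocab}


def cross_entropy_score(query_model, doc_lm, epsilon=1e-9):
    """Score a document by sum_w P(w|q') log P(w|d), which is rank-equivalent
    to negative KL divergence between the query model and the document model."""
    return sum(p * math.log(doc_lm.get(w, epsilon)) for w, p in query_model.items() if p > 0)


# Example: a user with a long-term interest in information retrieval issues
# the ambiguous query "feedback models".
history_lm = unigram_lm([
    "relevance feedback language models",
    "implicit feedback search personalization",
])
query_lm = unigram_lm(["feedback models"])
personalized = interpolate(query_lm, history_lm, alpha=0.8)

docs = {
    "d1": unigram_lm(["language models for relevance feedback in retrieval"]),
    "d2": unigram_lm(["customer feedback models for product reviews"]),
}
ranked = sorted(docs, key=lambda d: cross_entropy_score(personalized, docs[d]), reverse=True)
print(ranked)  # the IR-related document (d1) ranks first for this user
```

In this toy example, interpolating with the history model shifts the ranking toward the IR-related document, whereas scoring with the raw query model alone would slightly favor the shorter, review-oriented document; this is the kind of personalization effect the thesis studies.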

Customer Reviews

No reviews or ratings yet.

Product Details

General

Imprint: ProQuest, UMI Dissertation Publishing
Country of origin: United States
Release date: September 2011
Availability: Supplier out of stock. If you add this item to your wish list we will let you know when it becomes available.
First published: September 2011
Authors:
Dimensions: 254 x 203 x 7 mm (L x W x T)
Format: Paperback - Trade
Pages: 106
ISBN-13: 978-1-243-75215-4
Barcode: 9781243752154
Categories:
LSN: 1-243-75215-7


