SC&I’s Jiqun Liu and Chirag Shah Develop a Faceted Framework for User Study Design
Liu, a doctoral candidate, and Shah, associate professor of library and information science, have developed a faceted framework for supporting user study design, reporting, and evaluation.

Last month, SC&I doctoral candidate Jiqun Liu and Associate Professor of Library and Information Science Chirag Shah published the book “Interactive IR User Study Design, Evaluation, and Reporting.” SC&I conducted a Q&A with Liu to learn more about the book and their research on people’s information seeking and search activities.

What did you study and why?

As a researcher, I study how people's problematic situations, tasks, and information seeking intentions are connected in search interactions and what this means for the design and evaluation of user-centered interactive search systems. Doing this well requires a deep understanding of the available methods, tools, and study procedures for investigating information seeking and search activities. This book seeks to at least partially address this problem by developing a standard, generalizable framework that can help young researchers identify the main facets of user studies and understand the connections and "collaborations" among these facets.

How did you conduct the study?

We developed a faceted framework for supporting user study design, reporting, and evaluation based on a systematic review of state-of-the-art interactive information retrieval (IIR) research papers recently published in several top IR venues (n=462).

What were the key findings?

1) We identified three major research focuses: understanding user behavior and experience, evaluating system/interface features, and meta-evaluation of evaluation metrics.

2) Building on these research focuses, through paper coding analysis, we extracted and summarized facet values from specific cases and highlighted the under-reported user study components that may significantly affect the results of search studies.

3) We developed a faceted framework and applied it to evaluate a series of IIR user studies against their respective research questions, explaining the roles and impacts of the underlying connections and "collaborations" among different facet values.
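To make the idea of facets concrete, here is a minimal sketch in Python. This is our own illustration rather than code from the book, and the facet names (task_type, participant_sampling, and so on) are hypothetical examples of the kinds of study components the framework covers:

    # A minimal, hypothetical sketch of a faceted study description.
    # Facet names are illustrative examples, not the book's actual facets.
    REQUIRED_FACETS = {
        "research_focus",        # e.g., user behavior, system evaluation, meta-evaluation
        "task_type",             # e.g., known-item search, exploratory search
        "participant_sampling",  # e.g., university students, crowdworkers
        "system_or_interface",   # the search system or interface under study
        "behavioral_measures",   # e.g., queries, clicks, dwell time
    }

    def unreported_facets(study: dict) -> set:
        """Return the facets a study report leaves unspecified."""
        return {f for f in REQUIRED_FACETS if not study.get(f)}

    # Example: a study report that omits its participant sampling strategy.
    study = {
        "research_focus": "user behavior and experience",
        "task_type": "exploratory search",
        "system_or_interface": "custom web search interface",
        "behavioral_measures": ["queries", "clicks"],
    }
    print(unreported_facets(study))  # {'participant_sampling'}

Checking a study description against such a checklist is one simple way to surface under-reported components before they quietly affect a study's results.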

How are the findings impactful for the greater world at large?

1) By bridging diverse combinations of facet values with the study design decisions made to address various research problems, the faceted framework can shed light on IIR user study design, reporting, and evaluation practices and help students and young researchers design and assess their own studies.

2) The faceted framework can go beyond the IIR community and be applied in other closely related research areas, including human-computer interaction, computational social science, and management information systems.

What are the next steps? Is there something practical/applicable the general public can do with the information?

For ordinary system users and practitioners in the IT industry, I think many of the facets and components highlighted in this work should be taken into consideration when evaluating a new application, interface, and/or system.

What do you hope people take away from this work?

I think the main takeaway message is that there is no perfect study design decision. To address a given research problem or question, researchers need to find a balance among different facet values (leveraging the available methods to answer the research questions while reducing their negative impacts), understand the interactions among different facets (e.g., how different facets and components can "collaborate" and jointly help answer the research question), and finally make appropriate study design compromises.

What are your plans next? Anything else in the works?

The next steps include:

1) Turn the user study evaluation ideas and principles into quantifiable, scalable evaluation metrics for larger-scale experiments (see the sketch after this list).

2) Keep exploring new facets, components, and techniques applied in recently published user studies.

3) Establish new platforms (e.g., paper submission tracks, workshops) for reporting replication studies and unexpected results.

4) Explore new methods and approaches for applying the knowledge learned from carefully designed user studies in large-scale log-based analysis.
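As a toy illustration of what step 1 could look like in practice (the metric below is a hypothetical example of ours, not one proposed in the book), a reporting-completeness score over a coded corpus of papers might be as simple as:

    # Hypothetical sketch: turning a reporting checklist into a scalar metric.
    # Facet names are illustrative, not the book's actual facets.
    REQUIRED_FACETS = {"research_focus", "task_type", "participant_sampling",
                       "system_or_interface", "behavioral_measures"}

    def completeness_score(study: dict) -> float:
        """Fraction of required facets that a study report specifies."""
        reported = sum(1 for f in REQUIRED_FACETS if study.get(f))
        return reported / len(REQUIRED_FACETS)

    # Averaged over a coded corpus (e.g., the 462 reviewed papers), this
    # gives a simple, scalable summary of reporting completeness.
    corpus = [
        {"research_focus": "meta-evaluation", "task_type": "known-item search"},
        {"research_focus": "user behavior", "task_type": "exploratory search",
         "participant_sampling": "university students",
         "system_or_interface": "baseline web interface",
         "behavioral_measures": ["queries", "clicks"]},
    ]
    print(sum(completeness_score(s) for s in corpus) / len(corpus))  # 0.7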

Do you have anything else to add? Anyone to thank?

We are grateful to Dr. Gary Marchionini for his tremendous support on this work. We also greatly appreciate the insightful comments and suggestions provided by Drs. Nicholas J. Belkin, Max L. Wilson, Heather O'Brien, and Jacek Gwizdka. Finally, we are thankful for the help and support of Diane Cerra from Morgan & Claypool Publishers. Our studies that inspired and empirically supported this work were funded by the National Science Foundation (NSF) grants IIS-1423239 and IIS-1717488. Some of the recently published relevant works are listed here:

  • Liu, J. & Shah, C. (2019). Proactive identification of query failure. In Proceedings of the 82nd Annual Meeting of the Association for Information Science and Technology (ASIS&T’19), Melbourne, Australia, Oct. 19-23.
  • Liu, J., Mitsui, M., Belkin, N.J., & Shah, C. (2019). Task, information seeking intention, and user behavior: Toward a multi-level understanding of Web search. In Proceedings of the ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR’19), Glasgow, United Kingdom, March 10-14.
  • Liu, J. & Shah, C. (2019). Investigating the impacts of expectation disconfirmation on Web search. In Proceedings of the ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR’19), Glasgow, United Kingdom, March 10-14.
  • Liu, J., Wang, Y., Mandal, S., & Shah, C. (2019). Exploring the immediate and short-term effects of peer advice and cognitive authority on Web search behavior. Information Processing and Management, 56(3), 1010-1025.
  • Mitsui, M., Liu, J., & Shah, C. (2018). How much is too much? Whole session vs. first query behaviors in task prediction. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’18), Ann Arbor, MI, July 8-12.
  • Liu, J. (2017). Toward a unified model of human information behavior: An equilibrium perspective. Journal of Documentation, 73(4), 666-688.
  • Mitsui, M., Liu, J., Belkin, N.J., & Shah, C. (2017). Predicting information seeking intentions from search behaviors. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR’17), Tokyo, Japan, Aug. 7-11.

