As scientific practice becomes increasingly data-intensive, researchers depend on repositories and portals to locate datasets that are not only topically related to their tasks but also trustworthy, accessible, and reusable. Prior work on scientific data relevance has shown that users employ multiple criteria—including topicality, authority, quality, accessibility, and utility—in structured sequences when deciding whether a dataset is relevant. However, current dataset search engines largely rely on keyword-based ranking over metadata and rarely operationalize such user-oriented models in their ranking algorithms. This paper proposes RC-Rank, a user-oriented multi-criteria ranking framework for scientific data search that explicitly encodes relevance criteria into the ranking function. RC-Rank groups features into criterion-specific channels, defines a principled aggregation scheme over these channels, and learns criterion weights and feature parameters from user interaction data. We outline a practical feature design for common scientific data portals, present a criterion-aware scoring formulation, and illustrate the behavior of RC-Rank on a synthetic case study. We further describe an evaluation protocol combining offline learning-to-rank experiments with user studies to assess both retrieval effectiveness and perceived alignment with human relevance judgments. The framework bridges conceptual models of user relevance with deployable ranking algorithms, and provides a basis for building cognition-friendly dataset search systems.
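The criterion-aware scoring idea above can be sketched in code. This is a minimal illustration, not the paper's actual formulation: it assumes each criterion channel is scored as a sigmoid over a weighted feature sum and that channel scores are combined as a convex combination under learned criterion weights. All feature names, weights, and the specific aggregation form are hypothetical.

```python
import math

def channel_score(features, weights):
    """Score one criterion channel as a sigmoid over a weighted feature sum."""
    z = sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def rc_rank_score(channels, criterion_weights):
    """Aggregate per-criterion channel scores with normalized criterion weights."""
    total = sum(criterion_weights.values())
    return sum(
        (criterion_weights[c] / total) * channel_score(feats, w)
        for c, (feats, w) in channels.items()
    )

# Illustrative dataset: features grouped into criterion-specific channels,
# each paired with (feature values, feature weights). All numbers are made up.
channels = {
    "topicality":    ({"bm25": 1.8, "title_match": 1.0}, {"bm25": 0.6, "title_match": 0.8}),
    "authority":     ({"repo_reputation": 0.9},          {"repo_reputation": 1.2}),
    "quality":       ({"metadata_completeness": 0.7},    {"metadata_completeness": 1.0}),
    "accessibility": ({"open_license": 1.0},             {"open_license": 1.5}),
    "utility":       ({"format_standard": 1.0},          {"format_standard": 0.9}),
}
criterion_weights = {"topicality": 0.4, "authority": 0.15, "quality": 0.15,
                     "accessibility": 0.15, "utility": 0.15}

score = rc_rank_score(channels, criterion_weights)
```

In this sketch the criterion weights and the per-feature weights are the learnable parameters; fitting them from user interaction data (e.g. via pairwise learning-to-rank) would correspond to the learning step the abstract describes.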