Learning to rank

From Wikipedia, the free encyclopedia

Learning to rank[1] or machine-learned ranking (MLR) is the application of machine learning, typically supervised, semi-supervised or reinforcement learning, in the construction of ranking models for information retrieval systems.[2] Training data may, for example, consist of lists of items with some partial order specified between items in each list. This order is typically induced by giving a numerical or ordinal score or a binary judgment (e.g. "relevant" or "not relevant") for each item. The goal of constructing the ranking model is to rank new, unseen lists in a similar way to rankings in the training data.
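
As an illustration of this list-with-judgments structure, one training example could be represented as below; the feature values and relevance grades are hypothetical, chosen only to show the shape of the data.

```python
# Hypothetical training example for a single query: each candidate document carries
# a numerical feature vector and a graded relevance judgment
# (0 = not relevant, 1 = relevant, 2 = highly relevant).
training_example = {
    "query": "learning to rank",
    "documents": [
        {"features": [0.71, 12.0, 0.30], "relevance": 2},
        {"features": [0.42, 7.5, 0.11], "relevance": 1},
        {"features": [0.05, 3.2, 0.02], "relevance": 0},
    ],
}

# The ranking model is trained so that, for new queries, sorting documents by the
# model's predicted score reproduces orderings consistent with such judgments.
```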

Applications


In information retrieval


Ranking is a central part of many information retrieval problems, such as document retrieval, collaborative filtering, sentiment analysis, and online advertising.

A possible architecture of a machine-learned search engine is shown in the accompanying figure.

Training data consists of queries and documents matching them, together with a relevance degree for each match. It may be prepared manually by human assessors (or raters, as Google calls them), who check results for some queries and determine the relevance of each result. It is not feasible to check the relevance of all documents, so typically a technique called pooling is used: only the top few documents retrieved by some existing ranking models are checked. This technique may introduce selection bias. Alternatively, training data may be derived automatically by analyzing clickthrough logs (i.e. search results which received clicks from users),[3] query chains,[4] or search engine features such as Google's (since-replaced) SearchWiki. Clickthrough logs can be biased by the tendency of users to click on the top search results on the assumption that they are already well ranked.
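
One common heuristic for deriving preferences from clickthrough logs, in the spirit of the clickthrough-based approach cited above,[3] treats a clicked result as preferred over the unclicked results ranked above it. The sketch below is a minimal illustration of that idea, not the exact procedure of any particular search engine.

```python
def pairwise_preferences_from_clicks(ranked_doc_ids, clicked_ids):
    """Derive (preferred, non_preferred) document pairs from one search session.

    A clicked document is assumed to be preferred over every unclicked document
    ranked above it (a "skip-above" heuristic). Click data of this kind yields
    only relative preferences, which is why it suits pairwise methods.
    """
    preferences = []
    for position, doc in enumerate(ranked_doc_ids):
        if doc in clicked_ids:
            for skipped in ranked_doc_ids[:position]:
                if skipped not in clicked_ids:
                    preferences.append((doc, skipped))
    return preferences

# Example: the user clicked the third result, skipping the first two.
print(pairwise_preferences_from_clicks(["d1", "d2", "d3", "d4"], {"d3"}))
# [('d3', 'd1'), ('d3', 'd2')]
```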

Training data is used by a learning algorithm to produce a ranking model which computes the relevance of documents for actual queries.

Typically, users expect a search query to complete in a short time (such as a few hundred milliseconds for web search), which makes it impossible to evaluate a complex ranking model on each document in the corpus, and so a two-phase scheme is used.[5] First, a small number of potentially relevant documents are identified using simpler retrieval models which permit fast query evaluation, such as the vector space model, Boolean model, weighted AND,[6] or BM25. This phase is called top-k document retrieval, and many heuristics have been proposed in the literature to accelerate it, such as using a document's static quality score and tiered indexes.[7] In the second phase, a more accurate but computationally expensive machine-learned model is used to re-rank these documents.
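
A minimal sketch of this two-phase scheme is given below, assuming a hypothetical inverted_index object with a docs_matching method, a cheap first-phase scorer such as BM25, and a learned second-phase scorer; none of these names come from a specific library.

```python
def retrieve_and_rerank(query, inverted_index, cheap_score, learned_score, k=1000, n=10):
    """Two-phase ranking: cheap top-k candidate retrieval, then ML re-ranking.

    inverted_index: assumed to expose docs_matching(query) -> iterable of documents
    cheap_score:    fast scorer, e.g. BM25-style, cheap_score(query, doc) -> float
    learned_score:  expensive machine-learned scorer, learned_score(query, doc) -> float
    """
    # Phase 1: identify k potentially relevant documents with the fast model.
    candidates = sorted(inverted_index.docs_matching(query),
                        key=lambda doc: cheap_score(query, doc),
                        reverse=True)[:k]

    # Phase 2: re-rank only those k candidates with the accurate but expensive model.
    reranked = sorted(candidates,
                      key=lambda doc: learned_score(query, doc),
                      reverse=True)
    return reranked[:n]
```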

In other areas


Learning to rank algorithms have been applied in areas other than information retrieval, for example in machine translation for ranking candidate translations[8] and in recommender systems for ranking related news articles to recommend after a user has read a current article.[9]

Feature vectors


For the convenience of MLR algorithms, query-document pairs are usually represented by numerical vectors, which are called feature vectors. Such an approach is sometimes called bag of features and is analogous to the bag of words model and vector space model used in information retrieval for representation of documents.

Components of such vectors are called features, factors or ranking signals. They may be divided into three groups (features from document retrieval are shown as examples):

  • Query-independent or static features: features that depend only on the document, not on the query. For example, PageRank or the document's length. Such features can be precomputed in offline mode during indexing. They may be used to compute a document's static quality score (or static rank), which is often used to speed up search query evaluation.[7][10]
  • Query-dependent or dynamic features: features that depend on both the contents of the document and the query, such as the TF-IDF score or other non-machine-learned ranking functions.
  • Query-level features or query features: features that depend only on the query, such as the number of words in a query.

Some examples of features that were used in the well-known LETOR dataset are TF, TF-IDF and BM25 scores of a document's fields (such as title, body, anchor text and URL) for a given query, together with link-based features such as PageRank.
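
A minimal sketch of assembling such a feature vector for one query-document pair follows; the individual features (a PageRank value, document length, a TF-IDF sum, query length) are illustrative members of the three groups above, not the LETOR feature definitions.

```python
def feature_vector(query_terms, doc_terms, doc_pagerank, idf):
    """Build a toy feature vector for one query-document pair.

    idf maps a term to its inverse document frequency; all feature choices
    here are illustrative only.
    """
    # Query-independent (static) features: depend on the document only.
    static = [doc_pagerank, float(len(doc_terms))]

    # Query-dependent (dynamic) features: depend on both query and document.
    tf = {t: doc_terms.count(t) for t in set(query_terms)}
    tfidf_sum = sum(tf[t] * idf.get(t, 0.0) for t in tf)
    overlap = float(sum(1 for t in set(query_terms) if t in doc_terms))
    dynamic = [tfidf_sum, overlap]

    # Query-level features: depend on the query only.
    query_level = [float(len(query_terms))]

    return static + dynamic + query_level

vec = feature_vector(
    ["learning", "to", "rank"],
    "learning to rank is ranking with machine learning".split(),
    doc_pagerank=0.37,
    idf={"learning": 1.2, "to": 0.1, "rank": 2.3},
)
print(vec)  # [0.37, 8.0, 4.8, 3.0, 3.0]
```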

Selecting and designing good features is an important area in machine learning, which is called feature engineering.

Evaluation measures


There are several measures (metrics) which are commonly used to judge how well an algorithm is doing on training data and to compare the performance of different MLR algorithms. Often a learning-to-rank problem is reformulated as an optimization problem with respect to one of these metrics.

Examples of ranking quality measures include mean average precision (MAP), discounted cumulative gain (DCG) and its normalized variant NDCG, precision@n, and mean reciprocal rank (MRR).

DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used.[11] Other metrics, such as MAP, MRR and precision, are defined only for binary judgments.
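
For concreteness, a small implementation of DCG and its normalized variant is sketched below, using the common 2^rel - 1 gain and logarithmic position discount; other formulations of DCG exist.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of graded relevance labels."""
    return sum((2 ** rel - 1) / math.log2(rank + 2)   # rank is 0-based here
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevance grades of the results as returned by some ranking model:
print(round(ndcg([3, 2, 3, 0, 1, 2]), 3))  # ~0.949, close to the ideal ordering
```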

More recently, several new evaluation metrics have been proposed which claim to model a user's satisfaction with search results better than the DCG metric, notably expected reciprocal rank (ERR)[12] and Yandex's pfound.[13]

Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document than after a less relevant one.
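
As an illustration of that assumption, the sketch below computes expected reciprocal rank (ERR)[12] under a simple cascade user model; the grade-to-probability mapping follows the usual (2^g - 1) / 2^g_max form, here assuming a four-level relevance scale.

```python
def err(relevances, max_grade=3):
    """Expected reciprocal rank under a cascade user model.

    The user scans results top-down and stops at position r with probability
    R_r * prod(1 - R_i for i < r), where R_i = (2**g_i - 1) / 2**max_grade.
    ERR is the expected value of 1/r under that stopping distribution.
    """
    p_continue = 1.0
    total = 0.0
    for rank, grade in enumerate(relevances, start=1):
        r_stop = (2 ** grade - 1) / 2 ** max_grade  # chance this result satisfies the user
        total += p_continue * r_stop / rank
        p_continue *= 1.0 - r_stop
    return total

print(err([3, 1, 2, 0]))  # a highly relevant top result dominates the score
```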

Approaches


Learning-to-rank approaches are often categorized into one of three types: pointwise (where individual documents are scored in isolation), pairwise (where pairs of documents are ranked into a relative order), and listwise (where an entire list of documents is ordered).

Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning-to-rank problems in his book Learning to Rank for Information Retrieval.[1] He categorized them into three groups by their input spaces, output spaces, hypothesis spaces (the core function of the model) and loss functions: the pointwise, pairwise, and listwise approaches. In practice, listwise approaches often outperform pairwise and pointwise approaches. This statement was further supported by a large-scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets.[14]

In this section, without further notice, x denotes an object to be evaluated (for example, a document or an image), f(x) denotes a single-value hypothesis, h(·) denotes a bi-variate or multi-variate function, and L(·) denotes the loss function.

Pointwise approach


In this case, it is assumed that each query-document pair in the training data has a numerical or ordinal score. The learning-to-rank problem can then be approximated by a regression problem: given a single query-document pair, predict its score. Formally speaking, the pointwise approach aims at learning a function f(x) predicting the real-value or ordinal score of a document x using the loss function L(f; x_j, y_j).

A number of existing supervised machine learning algorithms can be readily used for this purpose. Ordinal regression and classification algorithms can also be used in the pointwise approach when they are used to predict the score of a single query-document pair and the score takes a small, finite number of values.
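
A minimal pointwise sketch is shown below, assuming graded relevance labels and using an ordinary least-squares linear scorer in place of the regression model; at query time, documents are simply sorted by the predicted score.

```python
import numpy as np

# Toy training set: feature vectors of query-document pairs and their relevance grades.
X = np.array([[0.9, 0.2], [0.4, 0.7], [0.1, 0.1], [0.8, 0.8]])
y = np.array([3.0, 2.0, 0.0, 3.0])  # grades treated as regression targets

# Fit a linear scorer f(x) = w . x + b by least squares (the pointwise squared loss).
A = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.lstsq(A, y, rcond=None)[0]

def score(x):
    return float(np.dot(w[:-1], x) + w[-1])

# Rank unseen documents for a query by sorting on the predicted score.
docs = {"d1": [0.2, 0.3], "d2": [0.9, 0.6], "d3": [0.5, 0.5]}
ranking = sorted(docs, key=lambda d: score(docs[d]), reverse=True)
print(ranking)  # ['d2', 'd3', 'd1']: documents ordered by predicted relevance
```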

Pairwise approach


In this case, the learning-to-rank problem is approximated by a classification problem: learning a binary classifier h(x_u, x_v) that can tell which document is better in a given pair of documents. The classifier takes two documents as its input, and the goal is to minimize a loss function L(h; x_u, x_v, y_{u,v}). The loss function typically reflects the number and magnitude of inversions in the induced ranking.

In many cases, the binary classifier is implemented with a scoring function f(x). As an example, RankNet[15] adapts a probability model and defines Pr(x_u ≻ x_v) as the estimated probability that document x_u has higher quality than document x_v:

    Pr(x_u ≻ x_v) = CDF(f(x_u) - f(x_v)),

where CDF(·) is a cumulative distribution function, for example the standard logistic CDF:

    CDF(s) = 1 / (1 + exp(-s)).
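
A small numerical sketch of this pairwise probability and the corresponding cross-entropy loss is given below, with a placeholder linear scoring function standing in for RankNet's original neural network.

```python
import numpy as np

def logistic_cdf(s):
    return 1.0 / (1.0 + np.exp(-s))

def pairwise_prob(score_u, score_v):
    """Estimated probability that document u should rank above document v."""
    return logistic_cdf(score_u - score_v)

def pairwise_loss(score_u, score_v, label):
    """Cross-entropy on the pair; label = 1 if u should rank above v, else 0."""
    p = pairwise_prob(score_u, score_v)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

# Placeholder linear scorer f(x) = w . x over query-document feature vectors.
w = np.array([1.5, -0.5])
x_u, x_v = np.array([0.8, 0.1]), np.array([0.3, 0.4])
print(pairwise_prob(w @ x_u, w @ x_v))           # ~0.71 > 0.5: u predicted to outrank v
print(pairwise_loss(w @ x_u, w @ x_v, label=1))  # loss shrinks as that probability grows
```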

Listwise approach


These algorithms try to directly optimize the value of one of the above evaluation measures, averaged over all queries in the training data. This is often difficult in practice because most evaluation measures are not continuous functions with respect to the ranking model's parameters, so continuous approximations or bounds on the evaluation measures have to be used; the SoftRank algorithm is one example.[16] LambdaMART is a pairwise algorithm which has been empirically shown to approximate listwise objective functions.[17]
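
As one concrete listwise surrogate, the sketch below implements a ListNet-style top-one-probability cross-entropy, in which the relevance labels and the model scores are each mapped to a softmax distribution over the whole list and then compared; it illustrates the general listwise idea and is not the SoftRank or LambdaMART objective.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - np.max(v))  # shift for numerical stability
    return e / e.sum()

def listwise_loss(predicted_scores, relevance_labels):
    """ListNet-style loss: cross-entropy between the top-one probability
    distributions induced by the labels and by the model scores."""
    p_true = softmax(np.asarray(relevance_labels, dtype=float))
    p_model = softmax(np.asarray(predicted_scores, dtype=float))
    return float(-np.sum(p_true * np.log(p_model)))

labels = [3, 1, 0, 2]          # graded relevance of the whole result list
good = [2.9, 0.8, 0.1, 2.0]    # scores that roughly agree with the labels
bad = [0.1, 2.0, 2.9, 0.8]     # scores that invert the list
print(listwise_loss(good, labels) < listwise_loss(bad, labels))  # True
```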

List of methods


A partial list of published learning-to-rank algorithms is shown below with years of first publication of each method:

Year | Name | Type | Notes
1989 | OPRF[18] | pointwise | Polynomial regression (instead of machine learning, this work refers to pattern recognition, but the idea is the same).
1992 | SLR[19] | pointwise | Staged logistic regression.
1994 | NMOpt[20] | listwise | Non-Metric Optimization.
1999 | MART (Multiple Additive Regression Trees)[21] | pairwise |
2000 | Ranking SVM (RankSVM) | pairwise | A more recent exposition is given in [3], which describes an application to ranking using clickthrough logs.
2001 | Pranking | pointwise | Ordinal regression.
2003 | RankBoost | pairwise |
2005 | RankNet | pairwise |
2006 | IR-SVM[22] | pairwise | Ranking SVM with query-level normalization in the loss function.
2006 | LambdaRank | pairwise/listwise | RankNet in which the pairwise loss function is multiplied by the change in the IR metric caused by a swap.
2007 | AdaRank[23] | listwise |
2007 | FRank | pairwise | Based on RankNet; uses a different loss function, fidelity loss.
2007 | GBRank | pairwise |
2007 | ListNet | listwise |
2007 | McRank | pointwise |
2007 | QBRank | pairwise |
2007 | RankCosine[24] | listwise |
2007 | RankGP[25] | listwise |
2007 | RankRLS | pairwise | Regularized least-squares based ranking. The work is extended in [26] to learning to rank from general preference graphs.
2007 | SVMmap | listwise |
2008 | LambdaSMART/LambdaMART | pairwise/listwise | Winning entry in the Yahoo Learning to Rank competition in 2010, using an ensemble of LambdaMART models. Based on MART (1999).[27] "LambdaSMART" stands for Lambda-submodel-MART; LambdaMART is the case with no submodel.
2008 | ListMLE[28] | listwise | Based on ListNet.
2008 | PermuRank[29] | listwise |
2008 | SoftRank[30] | listwise |
2008 | Ranking Refinement[31] | pairwise | A semi-supervised approach to learning to rank that uses boosting.
2008 | SSRankBoost[32] | pairwise | An extension of RankBoost to learn with partially labeled data (semi-supervised learning to rank).
2008 | SortNet[33] | pairwise | SortNet, an adaptive ranking algorithm which orders objects using a neural network as a comparator.
2009 | MPBoost[34] | pairwise | Magnitude-preserving variant of RankBoost. The idea is that the more unequal the labels of a pair of documents are, the harder the algorithm should try to rank them.
2009 | BoltzRank | listwise | Unlike earlier methods, BoltzRank produces a ranking model that looks, during query time, not just at a single document but also at pairs of documents.
2009 | BayesRank | listwise | Combines the Plackett-Luce model and a neural network to minimize the expected Bayes risk, related to NDCG, from the decision-making aspect.
2010 | NDCG Boost[35] | listwise | A boosting approach to optimize NDCG.
2010 | GBlend | pairwise | Extends GBRank to the learning-to-blend problem of jointly solving multiple learning-to-rank problems with some shared features.
2010 | IntervalRank | pairwise & listwise |
2010 | CRR[36] | pointwise & pairwise | Combined Regression and Ranking. Uses stochastic gradient descent to optimize a linear combination of a pointwise quadratic loss and a pairwise hinge loss from Ranking SVM.
2014 | LCR[37] | pairwise | Applies a local low-rank assumption to collaborative ranking. Received the best student paper award at WWW'14.
2015 | FaceNet | pairwise | Ranks face images with the triplet metric via a deep convolutional network.
2016 | XGBoost | pairwise | Supports various ranking objectives and evaluation metrics.
2017 | ES-Rank[38] | listwise | Evolutionary strategy learning-to-rank technique with 7 fitness evaluation metrics.
2018 | DLCM[39] | listwise | A multi-variate ranking function that encodes multiple items from an initial ranked list (local context) with a recurrent neural network and creates the result ranking accordingly.
2018 | PolyRank[40] | pairwise | Learns the ranking and the underlying generative model simultaneously from pairwise comparisons.
2018 | FATE-Net/FETA-Net[41] | listwise | End-to-end trainable architectures which explicitly take all items into account to model context effects.
2019 | FastAP[42] | listwise | Optimizes Average Precision to learn deep embeddings.
2019 | Mulberry | listwise & hybrid | Learns ranking policies maximizing multiple metrics across the entire dataset.
2019 | DirectRanker | pairwise | Generalisation of the RankNet architecture.
2019 | GSF[43] | listwise | A permutation-invariant multi-variate ranking function that encodes and ranks items with groupwise scoring functions built with deep neural networks.
2020 | RaMBO[44] | listwise | Optimizes rank-based metrics using blackbox backpropagation.[45]
2020 | PRM[46] | pairwise | A Transformer network encoding both the dependencies among items and the interactions between the user and items.
2020 | SetRank[47] | listwise | A permutation-invariant multi-variate ranking function that encodes and ranks items with self-attention networks.
2021 | PiRank[48] | listwise | Differentiable surrogates for ranking that can exactly recover the desired metrics and scale favourably to large list sizes, significantly improving internet-scale benchmarks.
2022 | SAS-Rank | listwise | Combines simulated annealing with an evolutionary strategy for implicit and explicit learning to rank from relevance labels.
2022 | VNS-Rank | listwise | Variable neighbourhood search in two novel methodologies in AI for learning to rank.
2022 | VNA-Rank | listwise | Combines simulated annealing with variable neighbourhood search for learning to rank.
2023 | GVN-Rank | listwise | Combines gradient ascent with variable neighbourhood search for learning to rank.

Note: as most supervised learning-to-rank algorithms can be applied to the pointwise, pairwise and listwise cases, only those methods which are specifically designed with ranking in mind are shown above.

History


Norbert Fuhr introduced the general idea of MLR in 1992, describing learning approaches in information retrieval as a generalization of parameter estimation;[49] a specific variant of this approach (using polynomial regression) had been published by him three years earlier.[18] Bill Cooper proposed logistic regression for the same purpose in 1992 [19] and used it with his Berkeley research group to train a successful ranking function for TREC. Manning et al.[50] suggest that these early works achieved limited results in their time due to little available training data and poor machine learning techniques.

Several conferences, such as NeurIPS, SIGIR and ICML, have had workshops devoted to the learning-to-rank problem since the mid-2000s.

Practical usage by search engines


Commercial web search engines began using machine-learned ranking systems in the 2000s. One of the first search engines to use such a system was AltaVista (whose technology was later acquired by Overture and then Yahoo), which launched a gradient-boosting-trained ranking function in April 2003.[51][52]

Bing's search is said to be powered by the RankNet algorithm,[53] which was invented at Microsoft Research in 2005.

In November 2009 the Russian search engine Yandex announced[54] that it had significantly increased its search quality due to the deployment of a new proprietary MatrixNet algorithm, a variant of gradient boosting which uses oblivious decision trees.[55] The company also sponsored a machine-learned ranking competition, "Internet Mathematics 2009",[56] based on its own search engine's production data, and Yahoo announced a similar competition in 2010.[57]

As of 2008, Google's Peter Norvig denied that their search engine exclusively relies on machine-learned ranking.[58] Cuil's CEO, Tom Costello, suggested that they preferred hand-built models because these can outperform machine-learned models when measured against metrics like click-through rate or time on landing page, since machine-learned models "learn what people say they like, not what people actually like".[59]

In January 2017, the technology was included in the open source search engine Apache Solr.[60] It is also available in the open source OpenSearch and Elasticsearch.[61][62] These implementations make learning to rank widely accessible for enterprise search.

Vulnerabilities


As with recognition applications in computer vision, recent neural-network-based ranking algorithms have also been found to be susceptible to covert adversarial attacks, both on the candidates and on the queries.[63] With small perturbations imperceptible to human beings, the ranking order can be arbitrarily altered. In addition, model-agnostic transferable adversarial examples have been found to be possible, which enables black-box adversarial attacks on deep ranking systems without requiring access to their underlying implementations.[63][64]
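
A toy illustration of such an attack on a differentiable scorer is sketched below: a fast-gradient-style perturbation of one candidate's feature vector flips the induced order. The linear scorer and the perturbation budget are illustrative assumptions, not the specific attacks of the cited papers.

```python
import numpy as np

# A toy differentiable scorer f(x) = w . x standing in for a neural ranking model.
w = np.array([0.8, -0.3, 0.5])

docs = np.array([[0.6, 0.2, 0.4],   # document A, scored slightly above document B
                 [0.5, 0.2, 0.5]])  # document B
print(np.argsort(-(docs @ w)))      # original order: A before B -> [0 1]

# Fast-gradient-style perturbation: nudge document B along the gradient of its
# score (for a linear scorer, simply w), within a small epsilon budget.
epsilon = 0.05
docs_adv = docs.copy()
docs_adv[1] += epsilon * np.sign(w)  # small change to document B's features

print(np.argsort(-(docs_adv @ w)))   # perturbed order: B now ranks above A -> [1 0]
```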

Conversely, the robustness of such ranking systems can be improved via adversarial defenses such as the Madry defense.[65]


References

  1. ^ a b Tie-Yan Liu (2009), "Learning to Rank for Information Retrieval", Foundations and Trends in Information Retrieval, 3 (3): 225–331, doi:10.1561/1500000016, ISBN 978-1-60198-244-5. Slides from Tie-Yan Liu's talk at WWW 2009 conference are available online Archived 2025-08-07 at the Wayback Machine
  2. ^ Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012) Foundations of Machine Learning, The MIT Press ISBN 9780262018258.
  3. ^ a b Joachims, T. (2002), "Optimizing Search Engines using Clickthrough Data" (PDF), Proceedings of the ACM Conference on Knowledge Discovery and Data Mining, archived (PDF) from the original on 2025-08-07, retrieved 2025-08-07
  4. ^ Joachims T.; Radlinski F. (2005), "Query Chains: Learning to Rank from Implicit Feedback" (PDF), Proceedings of the ACM Conference on Knowledge Discovery and Data Mining, arXiv:cs/0605035, Bibcode:2006cs........5035R, archived (PDF) from the original on 2025-08-07, retrieved 2025-08-07
  5. ^ B. Cambazoglu; H. Zaragoza; O. Chapelle; J. Chen; C. Liao; Z. Zheng; J. Degenhardt., "Early exit optimizations for additive machine learned ranking systems" (PDF), WSDM '10: Proceedings of the Third ACM International Conference on Web Search and Data Mining, 2010., archived from the original (PDF) on 2025-08-07, retrieved 2025-08-07
  6. ^ Broder A.; Carmel D.; Herscovici M.; Soffer A.; Zien J. (2003), "Efficient query evaluation using a two-level retrieval process", Proceedings of the twelfth international conference on Information and knowledge management (PDF), pp. 426–434, doi:10.1145/956863.956944, ISBN 978-1-58113-723-1, S2CID 2432701, archived from the original (PDF) on 2025-08-07, retrieved 2025-08-07
  7. ^ a b Manning C.; Raghavan P.; Schütze H. (2008), Introduction to Information Retrieval, Cambridge University Press. Section 7.1 Archived 2025-08-07 at the Wayback Machine
  8. ^ a b Kevin K. Duh (2009), Learning to Rank with Partially-Labeled Data (PDF), archived (PDF) from the original on 2025-08-07, retrieved 2025-08-07
  9. ^ Yuanhua Lv, Taesup Moon, Pranam Kolari, Zhaohui Zheng, Xuanhui Wang, and Yi Chang, Learning to Model Relatedness for News Recommendation Archived 2025-08-07 at the Wayback Machine, in International Conference on World Wide Web (WWW), 2011.
  10. ^ Richardson, M.; Prakash, A.; Brill, E. (2006). "Beyond PageRank: Machine Learning for Static Ranking" (PDF). Proceedings of the 15th International World Wide Web Conference. pp. 707–715. Archived (PDF) from the original on 2025-08-07. Retrieved 2025-08-07.
  11. ^ "Archived copy". Archived from the original on 2025-08-07. Retrieved 2025-08-07.{{cite web}}: CS1 maint: archived copy as title (link)
  12. ^ Olivier Chapelle; Donald Metzler; Ya Zhang; Pierre Grinspan (2009), "Expected Reciprocal Rank for Graded Relevance" (PDF), CIKM, archived from the original (PDF) on 2025-08-07
  13. ^ Gulin A.; Karpovich P.; Raskovalov D.; Segalovich I. (2009), "Yandex at ROMIP'2009: optimization of ranking algorithms by machine learning methods" (PDF), Proceedings of ROMIP'2009: 163–168, archived (PDF) from the original on 2025-08-07, retrieved 2025-08-07 (in Russian)
  14. ^ Tax, Niek; Bockting, Sander; Hiemstra, Djoerd (2015), "A cross-benchmark comparison of 87 learning to rank methods" (PDF), Information Processing & Management, 51 (6): 757–772, doi:10.1016/j.ipm.2015.07.002, S2CID 22782599, archived from the original (PDF) on 2025-08-07, retrieved 2025-08-07
  15. ^ Burges, Chris J. C.; Shaked, Tal; Renshaw, Erin; Lazier, Ari; Deeds, Matt; Hamilton, Nicole; Hullender, Greg (1 August 2005). "Learning to Rank using Gradient Descent". Archived from the original on 26 February 2021. Retrieved 31 March 2021. {{cite journal}}: Cite journal requires |journal= (help)
  16. ^ Taylor, M.J., Guiver, J., Robertson, S.E., & Minka, T.P. (2008). SoftRank: optimizing non-smooth rank metrics. Web Search and Data Mining.
  17. ^ Burges, Chris J. C. (2025-08-07). "From RankNet to LambdaRank to LambdaMART: An Overview". {{cite journal}}: Cite journal requires |journal= (help)
  18. ^ a b Fuhr, Norbert (1989), "Optimum polynomial retrieval functions based on the probability ranking principle", ACM Transactions on Information Systems, 7 (3): 183–204, doi:10.1145/65943.65944, S2CID 16632383
  19. ^ a b Cooper, William S.; Gey, Frederic C.; Dabney, Daniel P. (1992), "Probabilistic retrieval based on staged logistic regression", Proceedings of the 15th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '92, pp. 198–210, doi:10.1145/133160.133199, ISBN 978-0897915236, S2CID 125993
  20. ^ Bartell, Brian T.; Cottrell Garrison W.; Belew, Richard K. (1994), "Automatic Combination of Multiple Ranked Retrieval Systems", Sigir '94, pp. 173–181, doi:10.1007/978-1-4471-2099-5_18, ISBN 978-0387198897, S2CID 18606472, archived from the original on 2025-08-07, retrieved 2025-08-07
  21. ^ Friedman, Jerome H. (2001). "Greedy Function Approximation: A Gradient Boosting Machine". The Annals of Statistics. 29 (5): 1189–1232. doi:10.1214/aos/1013203451. ISSN 0090-5364. JSTOR 2699986.
  22. ^ Cao, Yunbo; Xu, Jun; Liu, Tie-Yan; Li, Hang; Huang, Yalou; Hon, Hsiao-Wuen (2025-08-07). "Adapting ranking SVM to document retrieval". Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. SIGIR '06. New York, NY, USA: Association for Computing Machinery. pp. 186–193. doi:10.1145/1148170.1148205. ISBN 978-1-59593-369-0.
  23. ^ Xu, Jun; Li, Hang (2025-08-07). "AdaRank: A boosting algorithm for information retrieval". Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. SIGIR '07. New York, NY, USA: Association for Computing Machinery. pp. 391–398. doi:10.1145/1277741.1277809. ISBN 978-1-59593-597-7.
  24. ^ Qin, Tao; Zhang, Xu-Dong; Tsai, Ming-Feng; Wang, De-Sheng; Liu, Tie-Yan; Li, Hang (2025-08-07). "Query-level loss functions for information retrieval". Information Processing & Management. Evaluating Exploratory Search Systems. 44 (2): 838–855. doi:10.1016/j.ipm.2007.07.016. ISSN 0306-4573.
  25. ^ Lin, Jung Yi; Yeh, Jen-Yuan; Chao Chung Liu (July 2012). "Learning to rank for information retrieval using layered multi-population genetic programming". 2012 IEEE International Conference on Computational Intelligence and Cybernetics (CyberneticsCom). IEEE. pp. 45–49. doi:10.1109/cyberneticscom.2012.6381614. ISBN 978-1-4673-0892-2.
  26. ^ Pahikkala, Tapio; Tsivtsivadze, Evgeni; Airola, Antti; Järvinen, Jouni; Boberg, Jorma (2009), "An efficient algorithm for learning to rank from preference graphs", Machine Learning, 75 (1): 129–165, doi:10.1007/s10994-008-5097-z.
  27. ^ C. Burges. (2010). From RankNet to LambdaRank to LambdaMART: An Overview Archived 2025-08-07 at the Wayback Machine.
  28. ^ Xia, Fen; Liu, Tie-Yan; Wang, Jue; Zhang, Wensheng; Li, Hang (2025-08-07). "Listwise approach to learning to rank: Theory and algorithm". Proceedings of the 25th international conference on Machine learning - ICML '08. New York, NY, USA: Association for Computing Machinery. pp. 1192–1199. doi:10.1145/1390156.1390306. ISBN 978-1-60558-205-4.
  29. ^ Xu, Jun; Liu, Tie-Yan; Lu, Min; Li, Hang; Ma, Wei-Ying (2025-08-07). "Directly optimizing evaluation measures in learning to rank". Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. SIGIR '08. New York, NY, USA: Association for Computing Machinery. pp. 107–114. doi:10.1145/1390334.1390355. ISBN 978-1-60558-164-4.
  30. ^ Taylor, Michael; Guiver, John; Robertson, Stephen; Minka, Tom (2025-08-07). "SoftRank: Optimizing non-smooth rank metrics". Proceedings of the international conference on Web search and web data mining - WSDM '08. New York, NY, USA: Association for Computing Machinery. pp. 77–86. doi:10.1145/1341531.1341544. ISBN 978-1-59593-927-2.
  31. ^ Rong Jin, Hamed Valizadegan, Hang Li, Ranking Refinement and Its Application for Information Retrieval Archived 2025-08-07 at the Wayback Machine, in International Conference on World Wide Web (WWW), 2008.
  32. ^ Massih-Reza Amini, Vinh Truong, Cyril Goutte, A Boosting Algorithm for Learning Bipartite Ranking Functions with Partially Labeled Data Archived 2025-08-07 at the Wayback Machine, International ACM SIGIR conference, 2008. The code Archived 2025-08-07 at the Wayback Machine is available for research purposes.
  33. ^ Leonardo Rigutini, Tiziano Papini, Marco Maggini, Franco Scarselli, "SortNet: learning to rank by a neural-based sorting algorithm" Archived 2025-08-07 at the Wayback Machine, SIGIR 2008 workshop: Learning to Rank for Information Retrieval, 2008
  34. ^ Zhu, Chenguang; Chen, Weizhu; Zhu, Zeyuan Allen; Wang, Gang; Wang, Dong; Chen, Zheng (2025-08-07). "A general magnitude-preserving boosting algorithm for search ranking". Proceedings of the 18th ACM conference on Information and knowledge management. CIKM '09. New York, NY, USA: Association for Computing Machinery. pp. 817–826. doi:10.1145/1645953.1646057. ISBN 978-1-60558-512-3.
  35. ^ Hamed Valizadegan, Rong Jin, Ruofei Zhang, Jianchang Mao, Learning to Rank by Optimizing NDCG Measure Archived 2025-08-07 at the Wayback Machine, in Proceeding of Neural Information Processing Systems (NIPS), 2010.
  36. ^ Sculley, D. (2025-08-07). "Combined regression and ranking". Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. KDD '10. New York, NY, USA: Association for Computing Machinery. pp. 979–988. doi:10.1145/1835804.1835928. ISBN 978-1-4503-0055-1.
  37. ^ Lee, Joonseok; Bengio, Samy; Kim, Seungyeon; Lebanon, Guy; Singer, Yoram (2025-08-07). "Local collaborative ranking". Proceedings of the 23rd international conference on World wide web. WWW '14. New York, NY, USA: Association for Computing Machinery. pp. 85–96. doi:10.1145/2566486.2567970. ISBN 978-1-4503-2744-2.
  38. ^ Ibrahim, Osman Ali Sadek; Landa-Silva, Dario (2025-08-07). "ES-Rank: Evolution strategy learning to rank approach". Proceedings of the Symposium on Applied Computing (PDF). SAC '17. New York, NY, USA: Association for Computing Machinery. pp. 944–950. doi:10.1145/3019612.3019696. ISBN 978-1-4503-4486-9.
  39. ^ Ai, Qingyao; Bi, Keping; Jiafeng, Guo; Croft, W. Bruce (2018), "Learning a Deep Listwise Context Model for Ranking Refinement", The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 135–144, arXiv:1804.05936, doi:10.1145/3209978.3209985, ISBN 9781450356572, S2CID 4956076
  40. ^ Davidov, Ori; Ailon, Nir; Oliveira, Ivo F. D. (2018). "A New and Flexible Approach to the Analysis of Paired Comparison Data". Journal of Machine Learning Research. 19 (60): 1–29. ISSN 1533-7928. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  41. ^ Pfannschmidt, Karlson; Gupta, Pritha; Hüllermeier, Eyke (2018). "Deep Architectures for Learning Context-dependent Ranking Functions". arXiv:1803.05796 [stat.ML].
  42. ^ Fatih Cakir, Kun He, Xide Xia, Brian Kulis, Stan Sclaroff, Deep Metric Learning to Rank Archived 2025-08-07 at the Wayback Machine, In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  43. ^ Ai, Qingyao; Wang, Xuanhui; Bruch, Sebastian; Golbandi, Nadav; Bendersky, Michael; Najork, Marc (2019), "Learning Groupwise Multivariate Scoring Functions Using Deep Neural Networks", Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, pp. 85–92, arXiv:1811.04415, doi:10.1145/3341981.3344218, ISBN 9781450368810, S2CID 199441954
  44. ^ Rolínek, Michal; Musil, Vít; Paulus, Anselm; Vlastelica, Marin; Michaelis, Claudio; Martius, Georg (2025-08-07). "Optimizing Rank-Based Metrics with Blackbox Differentiation". 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 7617–7627. arXiv:1912.03500. doi:10.1109/CVPR42600.2020.00764. ISBN 978-1-7281-7168-5.
  45. ^ Vlastelica, Marin; Paulus, Anselm; Musil, Vít; Martius, Georg; Rolínek, Michal (2025-08-07). "Differentiation of Blackbox Combinatorial Solvers". arXiv:1912.02175. {{cite journal}}: Cite journal requires |journal= (help)
  46. ^ Liu, Weiwen; Liu, Qing; Tang, Ruiming; Chen, Junyang; He, Xiuqiang; Heng, Pheng Ann (2025-08-07). "Personalized Re-ranking with Item Relationships for E-commerce". Proceedings of the 29th ACM International Conference on Information & Knowledge Management. CIKM '20. Virtual Event, Ireland: Association for Computing Machinery. pp. 925–934. doi:10.1145/3340531.3412332. ISBN 978-1-4503-6859-9. S2CID 224281012. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  47. ^ Pang, Liang; Xu, Jun; Ai, Qingyao; Lan, Yanyan; Cheng, Xueqi; Wen, Jirong (2020), "SetRank", Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 499–508, doi:10.1145/3397271.3401104, ISBN 9781450380164, S2CID 241534531
  48. ^ Swezey, Robin; Grover, Aditya; Charron, Bruno; Ermon, Stefano (2025-08-07). "PiRank: Scalable Learning To Rank via Differentiable Sorting". Advances in Neural Information Processing Systems. NeurIPS '21. 34. Virtual Event, Ireland. arXiv:2012.06731.
  49. ^ Fuhr, Norbert (1992), "Probabilistic Models in Information Retrieval", Computer Journal, 35 (3): 243–255, doi:10.1093/comjnl/35.3.243
  50. ^ Manning C.; Raghavan P.; Schütze H. (2008), Introduction to Information Retrieval, Cambridge University Press. Sections 7.4 Archived 2025-08-07 at the Wayback Machine and 15.5 Archived 2025-08-07 at the Wayback Machine
  51. ^ Jan O. Pedersen. The MLR Story Archived 2025-08-07 at the Wayback Machine
  52. ^ U.S. patent 7,197,497
  53. ^ "Bing Search Blog: User Needs, Features and the Science behind Bing". Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  54. ^ Yandex corporate blog entry about new ranking model "Snezhinsk" Archived 2025-08-07 at the Wayback Machine (in Russian)
  55. ^ The algorithm wasn't disclosed, but a few details were made public in [1] Archived 2025-08-07 at the Wayback Machine and [2] Archived 2025-08-07 at the Wayback Machine.
  56. ^ "Yandex's Internet Mathematics 2009 competition page". Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  57. ^ "Yahoo Learning to Rank Challenge". Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  58. ^ Rajaraman, Anand (2025-08-07). "Are Machine-Learned Models Prone to Catastrophic Errors?". Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  59. ^ Costello, Tom (2025-08-07). "Cuil Blog: So how is Bing doing?". Archived from the original on 2025-08-07.
  60. ^ "How Bloomberg Integrated Learning-to-Rank into Apache Solr | Tech at Bloomberg". Tech at Bloomberg. 2025-08-07. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  61. ^ "Learning to Rank for Amazon OpenSearch Service - Amazon OpenSearch Service". docs.aws.amazon.com. Retrieved 2025-08-07.
  62. ^ "Elasticsearch Learning to Rank: the documentation — Elasticsearch Learning to Rank documentation". elasticsearch-learning-to-rank.readthedocs.io. Retrieved 2025-08-07.
  63. ^ a b Zhou, Mo; Niu, Zhenxing; Wang, Le; Zhang, Qilin; Hua, Gang (2020). "Adversarial Ranking Attack and Defense". arXiv:2002.11293v2 [cs.CV].
  64. ^ Li, Jie; Ji, Rongrong; Liu, Hong; Hong, Xiaopeng; Gao, Yue; Tian, Qi (2019). "Universal Perturbation Attack Against Image Retrieval". International Conference on Computer Vision (ICCV 2019): 4899–4908. arXiv:1812.00552. Archived from the original on 2025-08-07. Retrieved 2025-08-07.
  65. ^ Madry, Aleksander; Makelov, Aleksandar; Schmidt, Ludwig; Tsipras, Dimitris; Vladu, Adrian (2025-08-07). "Towards Deep Learning Models Resistant to Adversarial Attacks". arXiv:1706.06083v4 [stat.ML].