I'm getting wildly different ranking/scoring between SQL Server full-text search and Lucene for the following query:
[Pseudo code] (*statut* in one of 3 fields) AND condition
I don't think this is a parser issue, because all the results appear to conform to the query requirements. However, of the top 1000 results, only 172 are common to both engines. Since the results from both Lucene and SQL Server conform to the query requirements, my only remaining hypothesis is that the scoring is fundamentally different. I have had trouble finding any information about how SQL Server scoring works, or any comparison of SQL Server and Lucene scoring. I don't necessarily expect identical result sets from the two engines, but I was expecting more than ~17% overlap, and I need to at least be able to explain the huge discrepancy.
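As an aside, the top-k overlap described above can be measured mechanically. A minimal sketch (the 1000 and 172 figures come from the question; the document IDs are purely hypothetical stand-ins):

```python
def top_k_overlap(results_a, results_b, k=1000):
    """Fraction of the top-k results of engine A that also appear
    in the top-k results of engine B (order within the top k is ignored)."""
    top_a = set(results_a[:k])
    top_b = set(results_b[:k])
    return len(top_a & top_b) / min(k, len(top_a))

# Hypothetical document IDs standing in for the two engines' result lists,
# constructed so that exactly 172 IDs are shared, as in the question.
sql_top = [f"doc{i}" for i in range(1000)]
lucene_top = [f"doc{i}" for i in range(828, 1828)]

print(top_k_overlap(sql_top, lucene_top))  # 0.172
```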
How can I explain this?
Full-text search in SQL Server can generate an optional score (or RANK value) that indicates the relevance of the rows returned by a full-text query. This rank value is computed for every row and can be used as an ordering criterion to sort the result set of a given query by relevance. Rank values indicate only a relative ordering of relevance of the rows within the result set. The actual values are unimportant and typically differ each time the query is run.
That said, the RANK in SQL Server full-text search results is not an absolute value; it is only meaningful relative to the other rows in the same result set.
Compare that with how Lucene scores documents: term frequency, inverse document frequency, boosts on documents and/or fields, filters, and so on all feed into the score.
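For contrast, the core of Lucene's classic scoring can be sketched as a deterministic TF-IDF computation. This is a simplification for illustration only, not Lucene's real implementation (it omits coord, field norms, and boosts):

```python
import math

def tf(term_freq):
    # Classic Lucene-style tf: square root of the term's frequency
    # in the document.
    return math.sqrt(term_freq)

def idf(doc_count, docs_with_term):
    # Classic Lucene-style idf: 1 + ln(N / (df + 1)), so rarer terms
    # contribute more to the score.
    return 1.0 + math.log(doc_count / (docs_with_term + 1))

def score(term_freqs, doc_count, doc_freqs):
    # Sum tf * idf^2 over the query terms present in the document.
    return sum(tf(f) * idf(doc_count, doc_freqs[t]) ** 2
               for t, f in term_freqs.items())

# A document matching a rare term scores higher than one matching
# an equally frequent common term.
rare = score({"statut": 2}, doc_count=10000, doc_freqs={"statut": 5})
common = score({"the": 2}, doc_count=10000, doc_freqs={"the": 9000})
print(rare > common)  # True
```

Because the score is a pure function of corpus statistics, the same index produces the same scores on every run, which is exactly the consistency contrasted with SQL Server's RANK below.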
Lucene scoring is also consistent across runs, unlike SQL Server, where there is no such guarantee. This is reflected in the naming as well: a full-text query in SQL Server produces a RANK value, not a score as in Lucene.
The values are simply not comparable, so it makes sense that the result sets will not be identical.