The Rabin-Karp algorithm, also known as the Karp-Rabin algorithm, was introduced by Richard M. Karp and Michael O. Rabin in 1987. It is a string-matching algorithm that also extends naturally to the multiple-pattern matching problem.
Its approach is somewhat unconventional: rather than comparing characters directly, it computes the hash values of the two strings and then decides whether there is a potential match by comparing these hash values.
Algorithm Analysis and Implementation
Choosing an appropriate hash function is crucial. Assume the text string is T of length n and the pattern string is P of length m, where m ≤ n, and let hash(T[i..i+m−1]) represent the hash value of the m-character substring of T starting at index i.
When scanning position i, it is natural to compare hash(T[i..i+m−1]) with hash(P). In this process, if we recalculate the hash value for the substring T[i+1..i+m] from scratch, each step requires O(m) time, which is not cost-effective. Observing that there are m−1 overlapping characters between the substrings T[i..i+m−1] and T[i+1..i+m], we can use a rolling hash function. This reduces the time complexity of each recalculation to O(1).
The rolling hash function used in the Rabin-Karp algorithm primarily leverages the concept of the Rabin fingerprint. For example, the formula to calculate the hash value of the first window T[0..m−1] is as follows:

hash(T[0..m−1]) = T[0]·d^(m−1) + T[1]·d^(m−2) + … + T[m−1]·d^0

Here, d is a constant. In Rabin-Karp, it is generally set to 256 because the maximum value of a (byte-sized) character does not exceed 255. The formula above has an issue: hash values could overflow. To address this, we take the result modulo a number q, which should be as large as possible and preferably a prime number. Here, we take q = 101.

The formula to calculate the hash value of the next window T[i+1..i+m] from the previous one is then:

hash(T[i+1..i+m]) = (d·(hash(T[i..i+m−1]) − T[i]·d^(m−1)) + T[i+m]) mod q
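As a quick sanity check, the two formulas above can be exercised in a short Python sketch; the string and window length are arbitrary choices for illustration:

```python
# Verify that the O(1) rolling update matches recomputing the hash from scratch.
# d (base) and q (modulus) follow the values chosen above.
d, q = 256, 101

def direct_hash(s: str) -> int:
    """hash(s) = (s[0]*d^(m-1) + ... + s[m-1]*d^0) mod q, via Horner's rule."""
    h = 0
    for ch in s:
        h = (h * d + ord(ch)) % q
    return h

text, m = "abracadabra", 4
dm = pow(d, m - 1, q)          # d^(m-1) mod q, precomputed once
h = direct_hash(text[:m])      # hash of the first window

for i in range(1, len(text) - m + 1):
    # Drop text[i-1], shift everything one base-d digit, append text[i+m-1].
    h = (d * (h - ord(text[i - 1]) * dm) + ord(text[i + m - 1])) % q
    assert h == direct_hash(text[i:i + m])  # rolling value agrees with direct value
```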
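Putting the pieces together, here is a minimal Python sketch of the algorithm as described above, with d = 256 and q = 101 as before; the sample text and pattern are illustrative:

```python
def rabin_karp(text: str, pattern: str, d: int = 256, q: int = 101) -> list[int]:
    """Return the start indices of every occurrence of pattern in text."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    dm = pow(d, m - 1, q)          # d^(m-1) mod q
    hp = ht = 0
    for j in range(m):             # initial hashes of pattern and first window
        hp = (hp * d + ord(pattern[j])) % q
        ht = (ht * d + ord(text[j])) % q
    matches = []
    for i in range(n - m + 1):
        # Hashes match -> verify character by character to rule out collisions.
        if ht == hp and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:              # roll the window forward by one position
            ht = (d * (ht - ord(text[i]) * dm) + ord(text[i + m])) % q
    return matches

print(rabin_karp("ababcabcab", "abc"))  # -> [2, 5]
```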
Complexity Analysis
Let's examine the space complexity first, which is easily determined: the algorithm only keeps a few hash values and indices, so it is O(1).
Now, consider the time complexity. Let the length of the text string be n and the pattern string be m. Preprocessing (computing the initial hashes) requires O(m). During matching, in the best case where there are no hash collisions, each of the n − m + 1 windows costs O(1), giving O(n). In the worst case, where there is a collision at every position, each window also pays an O(m) verification, giving O(nm). In practical scenarios, n is often much larger than m, so the final complexity table is:

- Preprocessing: O(m)
- Matching, best case: O(n)
- Matching, worst case: O(nm)
- Space: O(1)
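The worst case is easy to provoke with highly repetitive input: when every window has the same hash as the pattern, the O(m) character-by-character verification runs at every position. A small Python sketch, with arbitrary illustrative strings:

```python
# Worst-case illustration: with a highly repetitive text, every window's hash
# equals the pattern's hash, so verification would run at every position.
d, q = 256, 101

def h(s: str) -> int:
    """Rabin-Karp hash of s, via Horner's rule."""
    v = 0
    for ch in s:
        v = (v * d + ord(ch)) % q
    return v

text, pattern = "a" * 20, "a" * 5
hp = h(pattern)
hash_hits = sum(1 for i in range(len(text) - len(pattern) + 1)
                if h(text[i:i + len(pattern)]) == hp)
print(hash_hits)  # all 16 windows trigger verification
```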
Application Analysis
The primary application of the Rabin-Karp algorithm is in plagiarism detection for articles, for example in document-fingerprinting systems such as Stanford's Moss.
However, from the complexity data above, the Rabin-Karp algorithm does not seem to have a significant advantage. Is it practical for detecting text plagiarism? Feedback from actual usage indicates that the running time of plagiarism detection stays close to the best case, O(n). I believe this is mainly due to the following two points:
- In real-life articles, the text does not exhibit nearly as many hash collisions as we might imagine.
- The original content in a submitted article is usually far larger than any plagiarized content. In other words, successful matches do not occur as frequently as we might imagine.
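To make the connection to plagiarism detection concrete, here is a sketch of k-gram fingerprinting, which applies the same rolling hash to every k-character window of a document and compares the resulting fingerprint sets. The parameters and the similarity measure are illustrative choices, not any particular system's method:

```python
def fingerprints(doc: str, k: int = 5, d: int = 256, q: int = 2**31 - 1) -> set[int]:
    """Set of rolling hashes over all k-character windows of doc."""
    if len(doc) < k:
        return set()
    dm = pow(d, k - 1, q)
    h = 0
    for ch in doc[:k]:                 # hash of the first window
        h = (h * d + ord(ch)) % q
    fps = {h}
    for i in range(1, len(doc) - k + 1):
        # Same rolling update as in the matching algorithm above.
        h = (d * (h - ord(doc[i - 1]) * dm) + ord(doc[i + k - 1])) % q
        fps.add(h)
    return fps

a = fingerprints("the quick brown fox jumps over the lazy dog")
b = fingerprints("a quick brown fox leaps over a lazy dog")
overlap = len(a & b) / len(a | b)      # Jaccard similarity of the fingerprint sets
```

A large modulus is used here instead of 101 so that accidental collisions between unrelated windows stay rare; two documents sharing many k-grams will then share many fingerprints, which is what a plagiarism checker looks for.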