A Comparative Study between a Novel Deterministic Test for Mersenne Primes and the Well-Known Primality Tests

Abstract: In this article, we propose a new deterministic primality test for the Mersenne numbers 2^n − 1, called the Hindi Awad test (HAT). The idea of this test is related to that of Pepin's primality test for the Fermat numbers 2^(2^n) + 1. In addition, a modification that addresses the weaknesses of the Selfridge-Lucas test (SLT) is presented and used to propose a new modified test, called the Hindi Selfridge-Lucas test (HLT), with the help of base 3. Finally, a comparative study between some well-known primality tests and the new tests is carried out in order to classify them from the least to the most powerful and reliable according to their strength, speed, and effectiveness, based on results obtained from programs prepared and run in Mathematica; the results are presented in tables and graphs.


Introduction:
Prime numbers have held significance since the beginning of civilization because they form the building blocks of the whole numbers. Even today, many researchers try to understand their behavior, since there is no practical formula to generate them and their distribution is still considered mysterious, which forms a big puzzle for researchers and scientists. A primality test is a method used to determine whether an input natural number is prime or composite using number-theoretic rules and theorems. Primality testing is mostly used in the fields of cryptography and cybersecurity 1,2 . In general, primality tests differ from integer factorization because they only state whether a number is prime or not without giving its prime factors. In addition, primality testing is considered one of the oldest fundamental problems in mathematics, and it becomes more and more important due to its applications in cryptography such as network cybersecurity 3,4 . There are two types of primality tests: deterministic and probabilistic. On one hand, a primality test is deterministic if its output is "True" when the number is prime and "False" when the input is composite, with one hundred percent certainty 5 . On the other hand, a probabilistic primality test, often called a pseudoprimality test, may report a composite number as prime with some small probability. Furthermore, each primality test has its own properties, and some tests can be applied only to special types of numbers and special algebraic structures. Many primality tests can be found in the literature; they are classified according to their algebraic structure and accuracy 3,6,7 .
In this paper, a comparative study is presented in order to point out the most important and efficient well-known primality tests. In addition, two new approaches for primality testing are introduced. The first is the Hindi Awad test (HAT), which tests the primality of Mersenne numbers; its idea is related to that of Pepin's primality test for Fermat numbers. The second is the Hindi Selfridge-Lucas test (HLT), which hunts the Lucas pseudoprimes using Lucas sequences with special parameters.

Well-Known Primality Tests:
There is a large set of strategies and methods for checking and verifying the primality of a given positive integer based on given algebraic structures. They are classified as either probabilistic or deterministic tests. In the following, the most important and widely used primality tests are presented. For more details, one can see 8 and the references therein.

The Probabilistic Tests
Probabilistic primality tests are algorithms that report whether an input number is prime or not within a certain probability of error. In this type of primality testing, the algorithm typically picks a random number (called a witness) and verifies some criteria involving the tested number. Most probabilistic primality tests declare a number to be either definitely composite or a probable prime. A composite number that erroneously passes such a test is called a pseudoprime. There are many well-known probabilistic primality tests for an odd positive integer n that are widely used. The following theorems can be found in 9, 10 .
Theorem 1: (Fermat's Test - FT) If there exists a ∈ Z_n^* such that a^(n−1) ≢ 1 (mod n), then n is composite.
The weakness of FT is due to the presence of the pseudoprimes (Carmichael numbers), and its probability of error is less than 50% with a running time of Õ(log² n). For more information, one can see 10 and the references therein. The Solovay-Strassen test (SST), which is based on Euler's criterion, is another probabilistic test used in the comparative study below.
The Miller-Rabin test (MRT) states: write n − 1 = 2^s d with d odd; if there exists a ∈ Z_n^* such that a^d ≢ 1 (mod n) and a^(2^r d) ≢ −1 (mod n) for all 0 ≤ r < s, then n is composite. MRT is also a probabilistic test based on Fermat's Little Theorem together with the existence of non-trivial square roots of unity in Z_n. Its weakness is due to some strong pseudoprimes that may be reported, and its running time is of order O(k log³ n) with a probability of error less than 25%.
The Pocklington criterion states: let n − 1 = FR with F fully factored and gcd(F, R) = 1; if for every prime q | F there exists a with a^(n−1) ≡ 1 (mod n) and gcd(a^((n−1)/q) − 1, n) = 1, and F² > n, then n is prime. It is noted that PT and PGT are probabilistic primality tests, where the first is based on the Pocklington criterion and the second is based on the computation of the cyclotomic polynomials.
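The Fermat and Miller-Rabin criteria above can be sketched as follows. The paper's experiments use Mathematica; this Python version, with illustrative function names and a default of 10 random witnesses, is only a minimal stand-in:

```python
import random

def fermat_test(n, rounds=10):
    """Fermat test (FT): 'probable prime' or 'composite'.
    Carmichael numbers may erroneously pass."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a Fermat witness: n is definitely composite
    return True  # probable prime

def miller_rabin(n, rounds=10):
    """Miller-Rabin test (MRT): error probability at most 4^(-rounds)."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # no square root of -1 found: definitely composite
    return True
```

Note that the Carmichael number 561 fools the Fermat criterion for every base coprime to it, while Miller-Rabin rejects it almost surely.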

The Deterministic Tests
A primality test is deterministic if its output is "True" when the number is prime and "False" when the input is composite with absolute certainty.
The proofs of the following theorem and corollary can be found in 11 .
Theorem: (Lucas Test) Let n be an odd prime and let U_k(P, Q) be a Lucas sequence with discriminant D = P² − 4Q. If the Jacobi symbol (D/n) = −1, then n | U_{n+1}.
Corollary 1: If n is an odd number with (D/n) = −1 and n | U_{n+1}, then n is a Lucas probable prime.
For example, taking n = 19 gives 19 | U_20; thus, 19 is a Lucas probable prime. Now, in order to apply the above method, P and Q should be chosen effectively and rapidly such that (D/n) = −1. The first method was proposed by Selfridge with P = 1 11 . It is based on skipping −3 from the odd values (as a consequence of the periodic results that appear first), then selecting D to be the first element in the sequence 5, −7, 9, −11, 13, −15, … with (D/n) = −1 and setting Q = (1 − D)/4. It is well known that the results obtained by the Selfridge-Lucas test (SLT) are weak. The SLT cannot be considered a good deterministic method for hunting primes because some composite numbers (Lucas pseudoprimes) satisfy Corollary 1. This study suggests a new approach that solves this problem by modifying SLT to obtain a new test, called the Hindi Selfridge-Lucas test (HLT), with the help of FLT using base 3. The selection of the Lucas sequence parameters P and Q is based on Selfridge's method 11,12 .
Lemma 1: If n | U_{n+1}(P, Q), then gcd(n, Q) = 1.
Proof: Assume that p | gcd(n, Q). Since P = 1 under Selfridge's method, D = P² − 4Q ≡ P² ≢ 0 (mod p), which implies that p ∤ D and p ∤ P. Consider the sequence U_k = P·U_{k−1} − Q·U_{k−2} with U_0 = 0 and U_1 = 1. Then, by induction on k ≥ 2, it is obtained that p ∤ U_k for all k ≥ 1, which contradicts n | U_{n+1}. Thus, gcd(n, Q) = 1.
Theorem 7: (Hindi Selfridge-Lucas Test - HLT) An odd number n > 11 is prime if n | U_{n+1} with n ∤ V_{n+1} such that 3^(n−1) ≡ 1 (mod n), by using the Selfridge method for the selection of P and Q.
Proof: Suppose that n is an odd composite number; then n is either a Lucas pseudoprime or a Carmichael number. Hence, n | U_{n+1} with (D/n) = −1, and by the Selfridge method the parameters are discarded whenever (D/n) ≠ −1. This proves that n has at least one factor. Thus, from Lemma 1 it is obtained that gcd(n, 2Q) = 1 and n does not satisfy the FLT. The remainder of the argument defines a function on the prime factorization n = ∏ p_i^{e_i} and shows, by a case analysis, that either the function forces n to be prime (a contradiction) or n ± 1 would be a factor of it, which is impossible. Hence, n is prime.
Example 1: The number n = 1829 satisfies 1829 | U_1830 with (D/1829) = −1; hence, n = 1829 is a Lucas pseudoprime. But 3^1828 ≢ 1 (mod 1829), which implies that n = 1829 is not prime.
Remark 1: The overall time complexity of the HLT approach is twice the time needed for the computation of U_{n+1}. So, by using the matrix representation 11,12 , it is acquired that the time complexity is of order O(4 log³ n). This result was tested on numbers of up to 800,000 digits, and no counterexample to Theorem 7 was found.
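The HLT conditions of Theorem 7 (Selfridge's parameter choice, the Lucas-sequence divisibility checks, and the base-3 Fermat condition) can be sketched in Python as follows. The helper names are illustrative; the Lucas terms are computed by the standard binary double-and-add identities, and n is assumed odd and not a perfect square (otherwise Selfridge's search for D does not terminate):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                       # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def selfridge_params(n):
    """Selfridge's method: first D in 5, -7, 9, -11, ... with (D/n) = -1,
    then P = 1 and Q = (1 - D)/4."""
    D = 5
    while jacobi(D, n) != -1:
        D = -(D + 2) if D > 0 else -(D - 2)
    return 1, (1 - D) // 4, D

def lucas_uv(P, Q, k, n):
    """(U_k mod n, V_k mod n) via left-to-right binary doubling; n odd."""
    D = P * P - 4 * Q
    U, V, qk = 1, P, Q % n                # U_1, V_1, Q^1
    def half(x):                          # division by 2 modulo odd n
        return (x // 2) % n if x % 2 == 0 else ((x + n) // 2) % n
    for bit in bin(k)[3:]:                # skip the leading bit (index 1)
        U, V = (U * V) % n, (V * V - 2 * qk) % n      # index m -> 2m
        qk = (qk * qk) % n
        if bit == '1':                                 # index 2m -> 2m + 1
            U, V = half(P * U + V), half(D * U + P * V)
            qk = (qk * Q) % n
    return U, V

def hlt(n):
    """Theorem 7 sketch: n | U_{n+1}, n does not divide V_{n+1},
    and 3^(n-1) ≡ 1 (mod n)."""
    if pow(3, n - 1, n) != 1:
        return False
    P, Q, D = selfridge_params(n)
    U, V = lucas_uv(P, Q, n + 1, n)
    return U == 0 and V != 0
```

For instance, `hlt(1829)` rejects the Lucas pseudoprime from Example 1 via the base-3 condition.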

Baillie-PSW Primality Testing -PSWT
This test was presented in 1980 by Baillie, Pomerance, Selfridge, and Wagstaff and is known as the BPSW (or BSW) test 13,14 . The test begins with trial division, which checks for small prime divisors p < 1000; it then continues with the Miller-Rabin test and terminates with the Lucas sequence test, using either Baillie's method or Selfridge's method for selecting D, P, and Q (see 15 ).
The trial division test is a simple test, attributed to Fibonacci, whose idea is based on the following essential theorem 5 .

Theorem 8: (Trial Division Test) A positive integer n > 1 is composite if and only if it has a prime divisor p ≤ √n.
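Theorem 8 yields the obvious deterministic procedure; a minimal Python sketch (BPSW only uses it for small divisors, but the full search up to √n is shown for completeness):

```python
import math

def trial_division(n):
    """Primality by trial division (Theorem 8): n > 1 is composite
    exactly when some divisor d <= sqrt(n) exists."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False  # found a divisor <= sqrt(n): composite
    return True
```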

AKS Primality Test
This method is the most recent deterministic polynomial-time algorithm for primality testing; it appeared in 2002 and was suggested by Agrawal, Kayal, and Saxena 16 . It is based on a generalization of Fermat's Little Theorem 16,17 .
Lemma 3 is not efficient or practical to use as a primality test due to the huge running time of evaluating (x + a)^n (mod n). To eliminate the polynomials of higher degree, it is suggested to reduce modulo monic polynomials of degree r. This leads to the introduction of the double-modulo notation for the polynomial congruence class in Z_n[x]/(h(x)), where h(x) is a monic polynomial.
Definition 2: For f(x), g(x) ∈ Z[x], write f(x) ≡ g(x) (mod h(x), n) if f(x) and g(x) have the same residue in Z_n[x]/(h(x)).
Hence, if n is prime, then (x + a)^n ≡ x^n + a (mod h(x), n) for every integer a and arbitrary monic h(x) ∈ Z[x], which leads to a rapid check whenever deg(h(x)) is not too large. In the following, denote by A(a, n, r)(x) the congruence (x + a)^n − x^n − a ≡ 0 (mod x^r − 1, n) with a ≤ r and gcd(a, n) = 1.
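The double-modulo congruence A(a, n, r)(x) can be checked directly with coefficient vectors of length r, reducing exponents modulo r (since x^r ≡ 1) and coefficients modulo n. A minimal Python sketch:

```python
def polymul_mod(f, g, r, n):
    """Multiply f and g in Z_n[x]/(x^r - 1); exponents wrap modulo r."""
    h = [0] * r
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                h[(i + j) % r] = (h[(i + j) % r] + fi * gj) % n
    return h

def aks_congruence(a, n, r):
    """Check (x + a)^n ≡ x^n + a (mod x^r - 1, n) by repeated squaring."""
    base = [0] * r
    base[0] = a % n              # constant term a
    base[1] = (base[1] + 1) % n  # linear term x  (assumes r >= 2)
    result = [0] * r
    result[0] = 1                # the polynomial 1
    e = n
    while e:                     # square-and-multiply on polynomials
        if e & 1:
            result = polymul_mod(result, base, r, n)
        base = polymul_mod(base, base, r, n)
        e >>= 1
    target = [0] * r             # x^n + a, with x^n ≡ x^(n mod r)
    target[0] = a % n
    target[n % r] = (target[n % r] + 1) % n
    return result == target
```

The schoolbook multiplication is O(r²) per step; the paper's complexity claims assume faster polynomial arithmetic.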

Definition 3: A positive integer n is called a perfect power if n = a^b, where a and b are greater than 1.
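Detecting perfect powers in the sense of Definition 3 is the first step of the AKS algorithm. A minimal Python sketch, using integer b-th roots by binary search so that it also works for large n:

```python
def integer_root(n, b):
    """Largest a with a^b <= n, by binary search (exact integer arithmetic)."""
    lo, hi = 1, 1 << (n.bit_length() // b + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** b <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_perfect_power(n):
    """True iff n = a^b for some a, b > 1 (Definition 3).
    Only exponents b <= log2(n) need to be tried."""
    for b in range(2, n.bit_length() + 1):
        a = integer_root(n, b)
        if a > 1 and a ** b == n:
            return True
    return False
```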
The AKS primality test is based on the following theorem, found in 17 ; its proof can be found in 16 .
Theorem 10: (AKS) Suppose that n is not a perfect power, n has no prime factor ≤ r, ord_r(n) > 4 log² n, and A(a, n, r)(x) ≡ 0 (mod x^r − 1, n) for all 1 ≤ a ≤ √φ(r) log n. Then n is prime.
For a prime input, the congruence holds for every admissible a. However, if n = 561, then A(7, 561, 3)(x) = (x + 7)^561 − x^561 − 7 ≢ 0 (mod x^3 − 1, 561). Hence, n = 561 is not prime. In addition, the AKS team left a conjecture which reduces the number of steps in the computation process 16 .
Conjecture 1: If r is a prime number such that r ∤ n and A(−1, n, r)(x) ≡ 0 (mod x^r − 1, n), then n is either prime or n² ≡ 1 (mod r).
Remark 2: If Conjecture 1 is valid, then the method can be modified to use a suitable small r such that r ∤ n² − 1, with O(log² n) congruence computation steps. Thus, the overall complexity is of order Õ(log³ n). In addition, it is obtained that the search for r can be excluded by using the following new conjecture.

Analysis of the AKS Test
To analyze the correctness of the AKS algorithm presented and proved in 16 , the following theorem presented in 18 is used. Theorem 11: The AKS algorithm returns "True" if and only if n is prime.
The demonstration of the correctness of the AKS algorithm starts from Theorem 10 by verifying that n is not a perfect power. Then, the algorithm finds r and checks whether n has a factor in the interval [2, √φ(r) log n]; if so, the algorithm reports "False". Otherwise, the last step checks the binomial congruence, which must hold for all a in [1, √φ(r) log n] if n is prime. The remaining scenario is that Theorem 10 holds for all a ∈ [1, √φ(r) log n] while every prime factor of n exceeds √φ(r) log n; in this case, n would have to be a perfect power or prime, and the former has already been excluded in the first step. If so, n must be prime.
The analysis of this method continues by selecting a suitable value for r, which must be bounded polynomially in log n. The proof of the following lemma can be found in 18,19 .

Lemma 4:
Let n be a positive odd number. Then, there exists a prime number r with r ∤ n such that r ≤ ⌈16 log⁵ n⌉ and ord_r(n) > 4 log² n.
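The search for such an r can be carried out naively, since Lemma 4 guarantees that the search terminates within the stated bound. A Python sketch (the function names are illustrative; a practical implementation would sieve and use faster order computation):

```python
import math

def multiplicative_order(n, r):
    """Order of n modulo r; returns 0 when gcd(n, r) > 1."""
    if math.gcd(n, r) != 1:
        return 0
    k, x = 1, n % r
    while x != 1:
        x = (x * n) % r
        k += 1
    return k

def find_r(n):
    """Smallest r with ord_r(n) > 4 log2(n)^2, as required by Lemma 4."""
    bound = 4 * math.log2(n) ** 2
    r = 2
    while multiplicative_order(n, r) <= bound:
        r += 1
    return r
```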
In the literature, there are some improvements of the AKS test that reduce its time complexity by choosing a suitable value of r (see 18 ). Lenstra 20 changed the bound for the appropriate r, reducing it so that ord_r(n) > 4 log² n suffices. Despite that, the algorithm is still considered inefficient in practice since its running time remains large. After that, the time complexity of choosing the appropriate r was reduced by using the following theorem (see Cao 19 ). Theorem 12: (AKS-Bernstein Test 21 ) Assume that r and q are prime numbers such that q | (r − 1). If n has no prime factor less than r and A(a, n, r)(x) ≡ 0 (mod x^r − 1, n) for all a = 0, 1, …, q − 1, then n is a perfect prime power.
The above theorem is hypothetically more efficient because the powers of any integer can be obtained using Newton's iteration by solving x^b − n = 0, which is achieved in polynomial time. Moreover, the binomial congruence can be processed in O(log² n) steps using fast Fourier transform algorithms. Furthermore, the binomial congruence check can be summarized in only two steps with the use of Theorem 12. In the following, a conjecture is exhibited which may enhance the behavior of the AKS test.

Primality Testing for Special Numbers
In this part, a discussion of three different primality tests is presented for numbers of the form N = k·2^n + c, where k, n, and c are positive integers with k < 2^n. These tests are the Lucas-Lehmer test, Proth's test, and the new primality test for Mersenne numbers (HAT).

HAT For Mersenne Numbers
The HAT is a novel approach for primality testing of Mersenne numbers. The idea of this test parallels that of Pepin's primality test for Fermat numbers. The sufficient condition is a direct application of Fermat's Little Theorem, while the necessary condition requires an algebraic construction to be proven. A simulation has been run on Mersenne numbers of up to around 900,000 digits using Mathematica, and the test proved accurate in its outcomes and competitive in running time compared with the Lucas-Lehmer test.
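The benchmark that HAT is compared against is the classical Lucas-Lehmer test (LLT). Since the HAT criterion itself is developed in the source, only the standard LLT recurrence is shown here as a Python sketch for reference:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    iff s_{p-2} ≡ 0 (mod M_p), where s_0 = 4 and s_{k+1} = s_k^2 - 2."""
    if p == 2:
        return True          # M_2 = 3 is prime
    m = (1 << p) - 1         # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m  # one step of the Lucas-Lehmer recurrence
    return s == 0
```

For example, p = 11 is prime yet M_11 = 2047 = 23 · 89 is composite, which the recurrence detects.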

Comparative Study:
This study is based on computing the area under the smooth cubic-spline curve interpolating the measured points (n, t(n)), where t(n) is the running time for input n. Then, the graphs of the curves are compared via the areas under them to decide the most and the least powerful primality tests. The study is done by selecting random primes in the intervals I_k = [10^k, 10^(k+1)], where k ∈ Z^+. The running time of each test is recorded for each chosen input by using the Mathematica built-in function AbsoluteTiming[.]. In addition, the study is divided into three parts according to the nature and algebraic structure of the primality tests under study. Also, each part is divided into several branches depending on a given scale, after refining the intervals to subintervals I_{k,m} ⊆ [10^k, 10^(k+1)] for k ∈ Z^+, where m is fixed and entered by the end user. Finally, the collected data are represented in graphs to point out the results.
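The methodology (timing each test per input, then comparing areas under the timing curves) can be mimicked outside Mathematica. This Python sketch substitutes time.perf_counter for AbsoluteTiming[.] and a trapezoidal rule for the cubic-spline area, so the numbers it produces are illustrative only:

```python
import time

def time_test(test, numbers):
    """Record (n, running time) pairs for a primality test, mirroring
    the role of Mathematica's AbsoluteTiming[.]."""
    samples = []
    for n in numbers:
        start = time.perf_counter()
        test(n)
        samples.append((n, time.perf_counter() - start))
    return samples

def area_under_curve(samples):
    """Trapezoidal approximation of the area under the timing curve;
    the paper interpolates with cubic splines, this is a simpler stand-in."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area
```

A smaller area over the same interval means a faster (more powerful, in the paper's terminology) test.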

Probabilistic Primality Tests
The comparative analysis of this study is based on the computation of norms for smooth functions by determining the area under the curve using cubic-spline interpolation. This analysis is considered a good reference because it allows ordering the smooth curves as well as comparing them. First, a simulation is done using deterministic variants of the algorithms FT, SST, and MRT, with a modification of the base a = (k(n − 1)) + 1, where k is a fixed positive integer, for both the SST and the MRT. In addition, new modified tests called SST* and MRT* are performed to determine the errors based on the percentage of the pseudoprimes which may show up. This simulation is done on random bases a ∈ [2, n] with n > 10^6, with the help of the Mathematica function PrimePi[.]. Then, the pseudoprime average for each algorithm is determined, and the results are summarized and presented in Fig. 1. This proves that, even if the bases are special, there will be a high percentage of errors in reporting composite numbers as primes. Moreover, it can be noticed that MRT is the most powerful algorithm relative to pseudoprimes.
Based on Fig. 5, it is noticed that HLBT is the least powerful primality test, with an exponential-shaped timing curve, whereas PSWT is the most powerful test, with a log²-shaped curve. In order to be more precise in ranking the primality tests, both the L₁ and L∞ norms are used. The results are collected in a table after each simulation. Then, the area under the curve is measured using the L∞ norm in order to check how much time each primality test takes. This process is repeated for the tests with the least powerful results. The results obtained show that HLBT is the least powerful primality test, while PSWT is the most powerful one in this experiment. Also, FT is considered one of the least powerful (slowest) tests, whereas MRT is considered the most powerful (fastest) test among the randomized algorithms.
In addition, to remove any remaining doubts about HLBT and MRT, the L₁ norm is used on continuous subintervals where all the curves have the same endpoints. The results are shown in Fig. 6. Although the numerical approach of the L₁ norm is exhausting and uncertain for the computer to give outputs when the tables grow large, it is obtained that MRT is better than SST in accuracy, smoothness, and rapidity. In addition, it can be observed that PSWT is the most powerful primality test, while SST and MRT are not so accurate and can be considered among the least powerful primality tests at this task. To be more specific and accurate, the same study is repeated with a different fixed variable m, added after observing that the computations of the L₁ norms are demanding and exhausting for the computer; this leads to using the new subintervals I_{k,m}. In this way, the experiment is sped up and becomes more powerful in investigating and testing new primes. Moreover, the shape of the timing curve becomes clearer and smoother. The analysis starts by taking m = 100 with interval endpoint 10000. In addition, in Table 1, the L∞ norm is used in 7 rounds, where in each round the least powerful test is removed from the list and a new round is run with the remaining tests, and so on. After seven successive rounds, the tests are arranged from the least powerful to the most powerful. In addition, the frequency of each curve in each round is computed by the formula f% = (curve maximum / overall maximum) × 100, and the test with the least frequency in a round is removed before the next round, and so on. Based on Table 1, it can be claimed that HLBT is the least powerful primality test while PSWT is the most powerful one. Moreover, if the computations of the L₁ norm are continued for the other primality tests, Fig. 11 is obtained.
To sum up, the first part of this analysis is done with m = 100, and from Fig. 11 it can be declared that PSWT is the most powerful and effective algorithm for primality testing, while MRBT and FT are the least powerful and most exhausting algorithms since they consume lots of time. In addition, even if the binary representation of n is used in the method, it is no longer helpful and effective for HLBT and MRBT. Now, if m = 5000 is taken with interval endpoint 20000, then Fig. 12 is obtained. From the results obtained in Figs. 11 and 12, it can also be declared that FT is the least powerful primality test and PSWT is the most powerful one. Moreover, MRT dominates SST with a difference in area of 13464.8 units². Finally, it can be deduced that PSWT is the most reliable and straightforward primality test, while HLT and HLBT are the least reliable and most demanding primality tests.

AKS Primality Test
The AKS primality test is removed from this study for the following reasons. First, technically it requires a large amount of computer memory during the polynomial-congruence computations 1 . Second, there are insufficient improvements for reducing the iterations it demands of the computer's speed and memory. Finally, its curve is unclear compared with the curves of the other primality tests such as FT, SST, MRT, and PSWT (see Fig. 13).

Mersenne Primality Tests
In the following section, a study of Mersenne primality tests is done. In fact, a survey of which primality test is the most reliable in hunting Mersenne primes is performed. So, all the primality tests in this study are run in order to hunt the Mersenne primes of up to 895932 digits. In Table 2, the L∞ norm is used as in Table 1, but it took nine successive rounds to arrange the tests from the least powerful to the most powerful. From the results in Table 2, it can be observed that the least powerful algorithm in hunting Mersenne primes is HLBT, whereas the most powerful one is HAT. So, the study is focused only on PSWT, LLT, and HAT. It may be inferred that HAT (the new approach) is the most reliable primality test for Mersenne primes. In addition, the L₁ norm test is used for more accuracy, and the results are shown in Fig. 14. From the results in Fig. 14, HLBT certainly has the largest area compared with the areas of the other tests, and as such HLBT, FT, and SST are the least powerful tests for Mersenne primes, whereas LLT and HAT are the more reliable ones. Now, if the width of the interval is extended to 420921, it is obtained that PSWT is an underpowered test and consumes lots of time compared with HAT and LLT, whereas HAT is the most powerful one on this interval. Based on this observation, PSWT and HLT are unreliable and deficient here and affect the study negatively. So, these tests are removed from the study, and the results are shown in Figs. 15-17. The experiment continued normally and smoothly for more than two weeks without stopping, until after digit 895932 the program terminated without any noticeable output. In addition, from Fig. 17, it is clear that LLT exceeds HAT by an area equal to 1.90468 × 10⁷ units². Hence, it can be declared that LLT and HAT are the most powerful Mersenne primality tests.

Proth's Primality Test
In this part, Proth's primality test is studied to find Proth prime numbers N = k·2^n + 1 with odd k < 2^n. As done in Tables 1 and 2, the L∞ norm is used in nine successive rounds to arrange the tests from the least powerful to the most powerful. The results are presented in Table 3 and Figs. 18-20, shown below.
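Proth's theorem itself gives a very short certificate-based test: if any base a satisfies a^((N−1)/2) ≡ −1 (mod N), then N is prime, and for prime N half of all bases work. A minimal Python sketch with a deterministic base search (the search limit is an illustrative choice):

```python
def proth_test(k, n, search_limit=1000):
    """Proth's theorem: for N = k*2^n + 1 with odd k < 2^n, if some base a
    satisfies a^((N-1)/2) ≡ -1 (mod N), then N is prime."""
    assert k % 2 == 1 and k < (1 << n)
    N = k * (1 << n) + 1
    for a in range(2, min(N - 1, search_limit)):
        if pow(a, (N - 1) // 2, N) == N - 1:
            return True   # a certifies that N is prime
    return False          # no witness found: N is (very likely) composite
```

For example, N = 3·2² + 1 = 13 is certified prime by a = 2, since 2^6 ≡ −1 (mod 13).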
From Table 3, it can be seen that HLBT is the least powerful primality test, while PT and PSWT are the most powerful primality tests. To be more precise, the L₁ norm is used to arrange the mentioned primality tests from the least to the most powerful. The obtained results are presented in Figs. 18-20. Therefore, PSWT is the most powerful and reliable primality test for any number.

Conclusion:
Throughout this study, it is shown numerically that PSWT is the most powerful and reliable primality test on arbitrary input. Moreover, MRT is the least powerful and least reliable algorithm among the randomized algorithms for arbitrary input. In conclusion, it is declared that LLT, HAT, and PT are the most powerful and reliable primality tests for their specific inputs. However, it cannot be predicted how LLT and HAT will behave if the interval is extended to the largest discovered Mersenne primes.