Cryptanalysis

Software/analytical:

Project CS-1

Title: Cryptanalysis of "proprietary" ciphers included in WordPerfect, Lotus 1-2-3, etc., based on Vigenere cipher

Description:

Bruce Schneier, in his book "Applied cryptography," states that if a software security program proclaims that it has a "proprietary" encryption algorithm - significantly faster than DES - the odds are that it is some variant of the Vigenère polyalphabetic cipher. Cryptanalysis of the Vigenère cipher is believed to take only a few seconds with a computer. The algorithm for breaking the cipher is based on the so-called "index of coincidence" and is very well described in the literature. Your task is to implement this algorithm, test it, and then try to apply your program to files encrypted using WordPerfect, Lotus 1-2-3, etc. An additional task would be to create an updated list of applications, such as WordPerfect, that still use a "proprietary" algorithm based on the Vigenère cipher (see the catalog of Access Data [2] for a couple of likely candidates).
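
As a concrete starting point, the index of coincidence itself takes only a few lines of C. The sketch below is a minimal illustration (using the usual definition IC = sum f_i*(f_i - 1) / (N*(N - 1)), not a complete Vigenère breaker); in the actual attack the statistic is computed over every k-th character of the ciphertext for successive candidate key lengths k, and the key length that pushes the per-stream value toward that of natural language is selected, as described in the references.

    #include <ctype.h>
    #include <stdio.h>

    /* Index of coincidence: IC = sum f_i*(f_i - 1) / (N*(N - 1)).
       English text gives roughly 0.066; a flat (random) distribution of
       letters gives about 0.038, which is how the key length is found.  */
    double index_of_coincidence(const char *text)
    {
        long freq[26] = {0}, n = 0;
        for (const char *p = text; *p; p++)
            if (isalpha((unsigned char)*p)) {
                freq[toupper((unsigned char)*p) - 'A']++;
                n++;
            }
        if (n < 2) return 0.0;
        double sum = 0.0;
        for (int i = 0; i < 26; i++)
            sum += (double)freq[i] * (freq[i] - 1);
        return sum / ((double)n * (n - 1));
    }

    int main(void)
    {
        printf("%.4f\n", index_of_coincidence("ATTACK AT DAWN ON THE EASTERN FRONT"));
        return 0;
    }

Once the key length is known, each of the k interleaved streams is a simple shift cipher and can be solved by frequency analysis.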

Literature:

  1. B. Schneier, "Applied cryptography," chapter 1.4, pp. 13-15.
  2. Access Data, catalog of products; to order call (800) 658-5199 or (801) 224-6970, fax (801) 224-6009, or write Access Data 560 So. State St. Ste. J1, Orem, Utah 84058.
  3. D. R. Stinson, "Cryptography - Theory and Practice," chapter 1.2.3, pp. 31-36.
  4. A. Sinkov, "Elementary cryptanalysis," Mathematical Association of America, 1966.
  5. H. A. Bergen and W. J. Caelli, "File Security in Word Perfect 5.0," Cryptologia, vol. XV, no. 1, pp. 57-66.
Prerequisites: C or Pascal in PC or UNIX environment
 

Project CS-2

Title: Timing cryptanalysis of RSA and other public key cryptosystems.

Description:

In December 1995, Paul C. Kocher, a 22-year-old independent cryptography consultant from Stanford, CA, released a draft version of a paper warning vendors and users of the two most widespread public key cryptoalgorithms, RSA and DSS, about an attack he had devised. The attack works under the condition that the intruder can measure the amount of time used to complete cryptographic operations. The extra computations necessary to mount the attack take time proportional to the number of bits of the operands, leading to a full recovery of the secret (private) key in less than one hour for practical operand lengths. Kocher is working on a complete implementation of his idea, but he has already presented preliminary results at the RSA Data Security and CRYPTO conferences in 1996. His attack has been recognized by the most respected cryptographers, including the authors of the RSA cryptosystem. A countermeasure eliminating the threat of this attack is currently being added to software products sold by RSA Data Security, Inc.; it is, however, not obvious how to protect all implementations of RSA and DSS against timing attacks. Your task is to implement the attack for a selected implementation of RSA available in the public domain (RSAREF library, PGP, or PEM) on the basis of Kocher's preliminary papers. Additional tasks could be to analyze possible countermeasures, and to find the dependence between the difficulty of the attack and the algorithm for modular multiplication used in the implementation of the cryptoalgorithm.
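
To see where the timing variation originates, consider the textbook left-to-right square-and-multiply exponentiation used inside RSA: an extra modular multiplication is executed for every 1 bit of the secret exponent, so the total running time depends on both the key and the data. The sketch below is only a hedged toy (64-bit words and the textbook modulus 3233 = 61*53, not a real RSA implementation); Kocher's attack recovers the exponent bit by bit by correlating predicted and measured running times over many random inputs.

    #include <stdio.h>

    typedef unsigned long long u64;

    /* Left-to-right square-and-multiply: base^exp mod m, with m < 2^32 so
       that the products fit in 64 bits.  The multiplication guarded by the
       "if" runs only for 1 bits of the exponent -- this data-dependent work
       is what makes the total running time leak the secret exponent.        */
    u64 modexp(u64 base, u64 exp, u64 m)
    {
        u64 result = 1 % m;
        base %= m;
        for (int i = 63; i >= 0; i--) {
            result = (result * result) % m;      /* always executed */
            if ((exp >> i) & 1)
                result = (result * base) % m;    /* only for 1 bits */
        }
        return result;
    }

    int main(void)
    {
        /* toy "RSA": n = 3233 = 61*53, e = 17, d = 2753 */
        u64 c = modexp(65, 17, 3233);
        printf("cipher = %llu, plain = %llu\n", c, modexp(c, 2753, 3233));
        return 0;
    }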

Literature:

  1. Paul C. Kocher, "Cryptanalysis of Diffie-Hellman, RSA, DSS, and Other Systems Using Timing Attacks," extended abstract, December 1995.
  2. J. Markoff, "Secure digital transactions just got a little less secure," New York Times, December 11, 1995.
  3. B. Kaliski, "Timing Attacks on Cryptosystems," RSA Laboratories' Bulletin, no 2, January 23, 1996.
  4. Paul C. Kocher, "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems," Proc. CRYPTO'96, pp. 104-113.
  5. RSAREF library, available via ftp from ftp://ftp.rsa.com.
You can also contact Paul Kocher directly:

e-mail: pck@cryptography.com

address: P.O. Box 8243, Stanford, CA, 94309, USA

Prerequisites: C in UNIX environment, basics of probability theory
 

Project CS-3

Title: Factoring large integers using the Quadratic Sieve or the Number Field Sieve, and comparison with Lenstra's implementation of factoring over the Internet - 2U or 1-2G.

Description:

The security of the RSA cryptosystem is based on the difficulty of finding prime factors of large numbers. Therefore, RSA Data Security, Inc., which holds a patent for RSA, organizes a Factoring Challenge. Fifty-one integers, in the range from 100 to 500 decimal digits, were generated by choosing two primes at random and multiplying them together, according to the rules used in the RSA cryptosystem. The prime factors were afterwards destroyed. Anybody who is able to factor the smallest (!) unfactored number from the list receives an award. If there are no successful attempts within a given quarter, the award accumulates. The largest number from the list factored so far had 130 decimal digits. Computations are performed simultaneously by many volunteers from all over the world who use the spare time of their machines to complete the initial, most time-consuming phase of the algorithm. The partial results obtained are sent over the Internet to one place, where the final result is calculated. The two fastest algorithms devised to date are the Quadratic Sieve (QS) and the Number Field Sieve (NFS). The latter is more efficient for numbers greater than 100 decimal digits, but its idea and implementation are more complicated. Your task is to get familiar with one of these algorithms and implement it. An additional task is to get familiar with a public-domain implementation of a program for factorization of large numbers, and to associate different parts of its code with particular phases of the algorithm.
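
Both sieve algorithms ultimately reduce factoring to a congruence of squares: if x^2 = y^2 (mod N) while x is not congruent to +-y (mod N), then gcd(x - y, N) is a nontrivial factor of N. The toy fragment below illustrates only this final step on a small example; the sieving phase, which actually produces such x and y for large N, is the substance of the project.

    #include <stdio.h>

    typedef unsigned long long u64;

    static u64 gcd(u64 a, u64 b)
    {
        while (b) { u64 t = a % b; a = b; b = t; }
        return a;
    }

    int main(void)
    {
        /* N = 8051: 90^2 = 8100 = 49 = 7^2 (mod 8051), and 90 != +-7 (mod 8051),
           so gcd(90 - 7, 8051) is a proper factor of 8051.                       */
        u64 N = 8051, x = 90, y = 7;
        u64 f = gcd(x - y, N);
        printf("%llu = %llu * %llu\n", N, f, N / f);
        return 0;
    }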

Literature:

Basic:

  1. A. M. Odlyzko, "The Future of Integer Factorization," RSA Laboratories' CryptoBytes, vol. 1, no. 2, summer 1995, pp. 5-12.
  2. Information about "RSA Factoring Challenge," obtained by sending e-mail to challenge-administrator@rsa.com.
  3. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapters 3.2.6-3.2.7, pp. 95-98.
  4. D. M. Bressoud, "Factorization and Primality Testing," Springer-Verlag, 1989, chapter 8, pp. 102-119.
  5. J. A. Davis, D. B. Holdridge, and G.J. Simmons, "Status Report on Factoring," Proc. CRYPTO'84, pp. 198-215.
  6. C. Pomerance, "The number field sieve," Mathematics of Computation 1943-1993: A Half-Century of Computational Mathematics," Amer. Math. Soc., pp. 465-480.
Supplementary:
  1. A.K. Lenstra and M.S. Manasse, "Factoring by electronic mail," Proc. CRYPTO'89, pp. 355-371.
  2. T. Denny, B. Dodson, A.K. Lenstra, M.S. Manasse, "On the factorization of RSA-120," Proc. CRYPTO'93, pp. 166-173.
  3. D. Atkins, M. Graff, A.K. Lenstra, and P.C. Leyland, "The magic words are squeamish ossifrage," Proc. ASIACRYPT'94, pp. 263-277.
  4. B. Dodson and A.K. Lenstra, "NFS with four large primes: An explosive experiment," Proc. CRYPTO'95, pp. 372-385.
  5. A.K. Lenstra, H.W. Lenstra, Jr., "The Development of the Number Field Sieve," Springer-Verlag, 1993.
Prerequisites: C or Pascal in PC or UNIX environment
 

Hardware:

Project CH-1

Title: RC5 breaking machine.

Description:

RC5 is a new block cipher devised by Ron Rivest - one of the inventors of the RSA cryptosystem - as an alternative to the old American standard DES. The cipher has a variable key size and a variable input/output block size. The variable key length permits RC5 to be exported abroad, but only under the condition that the key size is less than or equal to 40 bits. Such a small key length may make the cipher vulnerable to an exhaustive key-search attack. Your task is to design the RC5 breaking machine at the logic level, and to estimate the number of chips necessary to break a single key within one hour. You can follow the design and cost estimates for the DES breaking machine devised by Michael Wiener [5]. The basic operations of RC5 are XOR, data-dependent rotation, and addition modulo 2^w (where w is the word size).
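
For orientation, the data path that the key-search engine must replicate is tiny. The C fragment below sketches only the RC5 encryption rounds for the 32-bit word size, as a software reference model; the key-expansion routine that fills the subkey table S from a candidate key is specified in Rivest's paper [2] and is omitted here, although the search machine must implement it as well.

    #include <stdint.h>

    #define RC5_ROUNDS 12                  /* r; RC5-32/12/b is a typical choice */

    /* Left rotation of a 32-bit word by the low 5 bits of s. */
    static uint32_t rotl32(uint32_t x, uint32_t s)
    {
        s &= 31;
        return (x << s) | (s ? (x >> (32 - s)) : 0);
    }

    /* One RC5-32 encryption: pt[0..1] -> ct[0..1].  S holds the 2r+2 subkeys
       produced by the RC5 key expansion (not shown).                          */
    void rc5_encrypt(const uint32_t S[2 * RC5_ROUNDS + 2],
                     const uint32_t pt[2], uint32_t ct[2])
    {
        uint32_t A = pt[0] + S[0];
        uint32_t B = pt[1] + S[1];
        for (int i = 1; i <= RC5_ROUNDS; i++) {
            A = rotl32(A ^ B, B) + S[2 * i];
            B = rotl32(B ^ A, A) + S[2 * i + 1];
        }
        ct[0] = A;
        ct[1] = B;
    }

A brute-force engine instantiates this data path many times in parallel, each copy expanding a different 40-bit candidate key and comparing the result against a known plaintext/ciphertext pair.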

Literature:

  1. B. Schneier, "Applied cryptography," chapter 14.8, pp. 344-346.
  2. R. L. Rivest, "The RC5 Encryption Algorithm," RSA Laboratories' CryptoBytes, vol. 1, no. 1, spring 1995, pp. 9-11.
  3. B. Kaliski and Y. L. Yin, "On the Security of the RC5 Encryption Algorithm," RSA Laboratories' CryptoBytes, vol. 1, no. 2, summer 1995, pp. 13-14.
  4. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapter 7.7.2, pp. 269-270.
  5. M.J. Wiener, "Efficient DES Key Search," Technical Report TR-244, School of Computer Science, Carleton University, May 1994.
Prerequisites: logic level design within Cadence, Altera or other CAD environment.

Implementation of cryptoalgorithms

Software/analytical:

Project IS-1

Title: Multiplication of large integers using Karatsuba and classical algorithms.

Description:

The Karatsuba algorithm for multiplication is a recursive algorithm running in time proportional to n^(log2 3), i.e., about n^1.585, which offers an asymptotic advantage over the classical paper-and-pencil algorithm of n^2 complexity (n is the number of bits of the operands). Your task is to implement both the classical and the Karatsuba algorithm in software and find the minimum operand length for which the Karatsuba algorithm runs faster.
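
The three-multiplication trick is easiest to see on machine-word operands, before any multiprecision bookkeeping is added. The sketch below multiplies 32-bit values through 16-bit halves (an illustration only; the project applies the same recursion to arrays of digits): replacing four half-size products by three is what yields the n^(log2 3) running time.

    #include <stdint.h>
    #include <stdio.h>

    /* Karatsuba on 32-bit operands split into 16-bit halves:
       x = x1*2^16 + x0,  y = y1*2^16 + y0,
       x*y = z2*2^32 + z1*2^16 + z0, with
       z2 = x1*y1,  z0 = x0*y0,  z1 = (x1+x0)*(y1+y0) - z2 - z0
       -- three half-size multiplications instead of four.        */
    uint64_t karatsuba32(uint32_t x, uint32_t y)
    {
        if (x < 0x10000 && y < 0x10000)      /* base case: single "digit" */
            return (uint64_t)x * y;

        uint32_t x1 = x >> 16, x0 = x & 0xFFFF;
        uint32_t y1 = y >> 16, y0 = y & 0xFFFF;

        uint64_t z2 = karatsuba32(x1, y1);
        uint64_t z0 = karatsuba32(x0, y0);
        uint64_t z1 = karatsuba32(x1 + x0, y1 + y0) - z2 - z0;

        return (z2 << 32) + (z1 << 16) + z0;
    }

    int main(void)
    {
        uint32_t a = 123456789u, b = 987654321u;
        printf("%llu\n", (unsigned long long)karatsuba32(a, b));
        printf("%llu\n", (unsigned long long)((uint64_t)a * b));   /* check */
        return 0;
    }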

Literature:

  1. D.E. Knuth, "The Art of Computer Programming," vol. 2, Seminumerical Algorithms, Addison Wesley, 1969, pp. 258-259.
  2. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapter 14.8, pp. 630-631.
Prerequisites: C or Pascal in PC or UNIX environment, recursion
 

Project IS-2

Title: Multiplication of large integers using Fast Fourier Transform and classical algorithms.

Description:

The algorithm based on the Fast Fourier Transform, also known as the Schonhage-Strassen algorithm, is asymptotically the fastest known algorithm for multiplication, running in time proportional to n*log(n)*log(log(n)) (where n is the number of bits of the operands). However, its advantage appears only for very large operands. Recent fast progress in traditional factorization algorithms, together with the appearance of factorization algorithms based on quantum computing, may mean that in the near future the only safe operand lengths for the RSA cryptosystem will be in the range of several thousand bits, where the Schonhage-Strassen algorithm shows its superiority. Your task is to implement both the classical and the Schonhage-Strassen algorithm in software and find the minimum operand length for which the latter algorithm runs faster.
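
Written out on digit vectors, multiplication is a convolution followed by carry propagation, and FFT-based algorithms such as Schonhage-Strassen obtain their speed by evaluating that convolution with the FFT instead of the quadratic double loop. The sketch below (base-10 digits, naive O(n^2) convolution, with the FFT step deliberately omitted) shows the structure that the classical and the FFT multiplier share, so the inner loops can later be replaced by an FFT-based convolution.

    #include <stdio.h>

    #define NDIG 8   /* digits per operand in this toy example */

    /* Multiply two little-endian base-10 digit vectors of length n.
       Step 1: convolution c[k] = sum over i+j=k of a[i]*b[j]
               (the part an FFT-based multiplier computes in O(n log n));
       Step 2: carry propagation to bring every digit back below 10.     */
    void mul_digits(const int *a, const int *b, int n, int *c)
    {
        for (int k = 0; k < 2 * n; k++)
            c[k] = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                c[i + j] += a[i] * b[j];
        for (int k = 0, carry = 0; k < 2 * n; k++) {
            c[k] += carry;
            carry = c[k] / 10;
            c[k] %= 10;
        }
    }

    int main(void)
    {
        int a[NDIG] = {9, 8, 7, 6, 5, 4, 3, 2};   /* 23456789, least digit first */
        int b[NDIG] = {1, 2, 3, 4, 5, 6, 7, 8};   /* 87654321 */
        int c[2 * NDIG];
        mul_digits(a, b, NDIG, c);
        for (int k = 2 * NDIG - 1; k >= 0; k--)
            printf("%d", c[k]);                   /* 23456789 * 87654321 */
        printf("\n");
        return 0;
    }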

Literature:

  1. D. Guinier, "The Multiplication of Very Large Integers Using the Discrete Fast Fourier Transform," ACM-SIGSAC Review, Special Group on Security Audit and Control, vol. 9, no. 3, Summer 1991, pp. 26-36.
Prerequisites: C or Pascal in PC or UNIX environment, FFT
 

Project IS-3

Title: Modulo reduction of large integers using the classical and Barrett algorithms.

Description:

The Barrett algorithm for modular reduction of large integers is based on a very simple idea, similar to the way you perform calculations using your calculator. When efficiently implemented, this algorithm can work significantly faster than the classical algorithm based on trial division. Your task is to implement both algorithms, find out the regions of superiority of each algorithm (as a function of the operand length), and assess how big the performance advantage of the Barrett algorithm over the classical algorithm can be for different operand lengths.
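
The single-precision version already contains the whole idea. The hedged sketch below assumes a modulus below 2^32 and uses the non-standard (but widely supported in GCC/Clang) __uint128_t type for the double-width products: mu = floor(2^64 / m) is precomputed once, after which each reduction costs two multiplications, a shift, and a short correction loop instead of a division. The multiprecision version required by the project follows the same pattern limb by limb.

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned __int128 u128;   /* GCC/Clang extension */

    typedef struct {
        uint64_t m;    /* modulus, 1 < m < 2^32, so that m*m fits in 64 bits */
        uint64_t mu;   /* precomputed floor(2^64 / m)                        */
    } barrett_t;

    barrett_t barrett_init(uint64_t m)
    {
        barrett_t b = { m, (uint64_t)(((u128)1 << 64) / m) };
        return b;
    }

    /* Reduce x (x < m^2) modulo m without a division: the quotient is
       estimated as q = floor(x*mu / 2^64); the estimate is off by at most
       a couple of units, which the correction loop fixes.                 */
    uint64_t barrett_reduce(const barrett_t *b, uint64_t x)
    {
        uint64_t q = (uint64_t)(((u128)x * b->mu) >> 64);
        uint64_t r = x - q * b->m;
        while (r >= b->m)
            r -= b->m;
        return r;
    }

    int main(void)
    {
        barrett_t b = barrett_init(1000000007u);
        uint64_t x = (uint64_t)123456789u * 987654321u;
        printf("%llu %llu\n",
               (unsigned long long)barrett_reduce(&b, x),
               (unsigned long long)(x % b.m));    /* the two must agree */
        return 0;
    }

Because mu is computed once per modulus, Barrett reduction pays off when many reductions share the same modulus, which is exactly the situation in modular exponentiation.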

Literature:

  1. A. Bosselaers, R. Govaerts, and J. Vandewalle, "Comparison of Three Modular Reduction Functions," Proc. CRYPTO'93, pp.175-186.
  2. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapter 14.3.3, pp. 603-604.
  3. P. Barrett, "Implementing the Rivest Shamir and Adleman Public Key Encryption Algorithm on a Standard Digital Signal Processor," Proc. CRYPTO'86, pp. 311-323.
Prerequisites: C or Pascal in PC or UNIX environment
 

Project IS-4

Title: Modular multiplication using Montgomery method.

Description:

The Montgomery method is considered the fastest algorithm for modular multiplication reported in the open literature. (At the rump session of CRYPTO'95, Josh Benaloh announced that a faster algorithm had been developed at Microsoft, but the details were not revealed, and the algorithm remains proprietary.) The idea of the Montgomery algorithm is similar in spirit to multiplication using the Fast Fourier Transform. We want to compute C = A*B mod N. The arguments A and B are first transformed to another domain by computing A' = A*2^n mod N and B' = B*2^n mod N. In this new domain, modular multiplication can be performed significantly faster than in the original domain. This is achieved by computing C' = A' * B' * 2^-n mod N = (A*B) * 2^n mod N = C*2^n mod N. The return to the original domain is equivalent to multiplication by 1 in the new domain (C = 1*C' * 2^-n mod N). Modular exponentiation is a basic transformation of the majority of public key cryptoalgorithms. In exponentiation, a long sequence of modular multiplications is performed. For such a sequence, the input and output transformations are performed only once. Switching to the other domain can therefore significantly speed up the whole exponentiation. Knowledge of the Montgomery algorithm is crucial for designers of the majority of today's software and hardware public-key-cryptosystem implementations. Your task is to develop an efficient implementation of the Montgomery algorithm, and to compare your implementation with the implementations of modular multiplication available in the public domain (RSAREF, PGP, or PEM).
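
A single-precision version shows all of the ingredients: the constant n' = -N^(-1) mod 2^64, the REDC reduction step, and the conversions into and out of the Montgomery domain. The sketch below is a hedged illustration for an odd modulus below 2^63, written with the non-standard __uint128_t type of GCC/Clang; the project requires the multiprecision analogue, in which the same REDC step is applied limb by limb.

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned __int128 u128;   /* GCC/Clang extension */

    /* n' = -N^(-1) mod 2^64, by Newton iteration (N must be odd). */
    static uint64_t mont_nprime(uint64_t N)
    {
        uint64_t inv = 1;
        for (int i = 0; i < 6; i++)            /* each step doubles the valid bits */
            inv *= 2 - N * inv;
        return (uint64_t)0 - inv;
    }

    /* REDC: for T < N*2^64 returns T * 2^(-64) mod N.  Requires N odd, N < 2^63. */
    static uint64_t redc(u128 T, uint64_t N, uint64_t nprime)
    {
        uint64_t m = (uint64_t)T * nprime;          /* taken modulo 2^64      */
        u128 t = (T + (u128)m * N) >> 64;           /* exact division by 2^64 */
        return (uint64_t)(t >= N ? t - N : t);
    }

    int main(void)
    {
        uint64_t N  = 0x7FFFFFFFFFFFFFE7ULL;         /* an odd 63-bit modulus  */
        uint64_t np = mont_nprime(N);
        uint64_t r  = ((uint64_t)0 - N) % N;         /* R mod N, with R = 2^64 */
        uint64_t R2 = (uint64_t)(((u128)r * r) % N); /* R^2 mod N              */

        uint64_t a = 123456789012345ULL, b = 987654321098765ULL;
        uint64_t am = redc((u128)a * R2, N, np);     /* into the Montgomery domain */
        uint64_t bm = redc((u128)b * R2, N, np);
        uint64_t cm = redc((u128)am * bm, N, np);    /* multiply inside the domain */
        uint64_t c  = redc(cm, N, np);               /* back to the ordinary form  */

        printf("%llu\n%llu\n", (unsigned long long)c,
               (unsigned long long)(uint64_t)(((u128)a * b) % N));   /* must agree */
        return 0;
    }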

Literature:

  1. S. Dusse, B. Kaliski Jr., "A Cryptographic Library for the Motorola DSP56000," Proc. EUROCRYPT'90, pp. 230-243.
  2. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapter 14.3.2, pp. 600-603.
  3. A. Bosselaers, R. Govaerts, and J. Vandewalle, "Comparison of Three Modular Reduction Functions," Proc. CRYPTO'93, pp.175-186.
  4. P.L. Montgomery, "Modular Multiplication without Trial Division," Mathematics of Computation, vol. 44, 1985, pp. 519-521.
  5. RSAREF library, available via ftp from ftp://ftp.rsa.com.
Prerequisites: C or Pascal in PC or UNIX environment

Project IS-5

Title: Implementation of a chosen elliptic curve cryptosystem according to the IEEE 1363 standard.

Description:

Elliptic curve cryptosystems are a new, emerging group of public key cryptosystems which may successfully compete with RSA in the near future. It is believed that the same strength can be achieved by using a 1024-bit key in RSA and only a 160-bit key in the corresponding elliptic curve cryptosystem. Additionally, the strength of elliptic curve cryptosystems grows much faster with key size increases than does the strength of RSA or DSS. As a result, very compact and fast implementations of these cryptosystems are possible, making them the best choice for applications where computational power or bandwidth is limited. Your task is to implement a chosen elliptic curve cryptosystem in software, following the IEEE 1363 draft standard.
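
For orientation, the group operation underlying every elliptic curve cryptosystem is point addition on y^2 = x^3 + ax + b. The hedged sketch below works over GF(p) for the toy prime p = 97, so that machine words suffice; a real implementation replaces these machine-word operations with multiprecision field arithmetic (or GF(2^m) arithmetic) and builds the key agreement, encryption, and signature schemes of IEEE 1363 on top of them.

    #include <stdio.h>

    typedef long long i64;

    /* Toy curve y^2 = x^3 + 2x + 3 over GF(97). */
    static const i64 P = 97, A = 2;

    typedef struct { i64 x, y; int inf; } point;   /* inf = 1: point at infinity */

    static i64 mod(i64 v) { v %= P; return v < 0 ? v + P : v; }

    static i64 powm(i64 b, i64 e)
    {
        i64 r = 1;
        b = mod(b);
        for (; e; e >>= 1) {
            if (e & 1) r = mod(r * b);
            b = mod(b * b);
        }
        return r;
    }

    static i64 inv(i64 v) { return powm(v, P - 2); }   /* Fermat: P is prime */

    /* Chord-and-tangent addition of two points. */
    point ec_add(point p1, point p2)
    {
        if (p1.inf) return p2;
        if (p2.inf) return p1;
        if (p1.x == p2.x && mod(p1.y + p2.y) == 0) {
            point o = {0, 0, 1};                       /* Q + (-Q) = infinity */
            return o;
        }
        i64 lam = (p1.x == p2.x && p1.y == p2.y)
            ? mod(mod(3 * p1.x * p1.x + A) * inv(mod(2 * p1.y)))   /* tangent */
            : mod(mod(p2.y - p1.y) * inv(mod(p2.x - p1.x)));       /* chord   */
        point r;
        r.inf = 0;
        r.x = mod(lam * lam - p1.x - p2.x);
        r.y = mod(lam * (p1.x - r.x) - p1.y);
        return r;
    }

    int main(void)
    {
        point g = {0, 10, 0};   /* on the curve: 10^2 = 3 = 0^3 + 2*0 + 3 (mod 97) */
        point g2 = ec_add(g, g);
        point g3 = ec_add(g2, g);
        printf("2G = (%lld, %lld), 3G = (%lld, %lld)\n", g2.x, g2.y, g3.x, g3.y);
        return 0;
    }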

Literature:

  1. A. Menezes, "Elliptic Curve Cryptosystems," RSA Laboratories' CryptoBytes, vol. 1, no. 2, summer 1995, pp. 1-4.
  2. S. Vanstone, "An Introduction to Elliptic Curve Cryptosystems," presentation at RSA Data Security Conf., January 1995.
  3. D. R. Stinson, "Cryptography: Theory and Practice," chapter 5.2.
  4. A. Menezes, "Elliptic Curve Public Key Cryptosystems," Kluwer Academic Publishers, 1993.
  5. "IEEE 1363: Standard for Public Key Cryptography," available by www from http://stdsbbs.ieee.org/groups/1363/
  6. A. Menezes and S. Vanstone, "Elliptic Curve Cryptosystems and Their Implementation," Journal of Cryptology, vol. 6, no.4, Autumn 1993, pp. 209-224.
  7. G. B. Agnew, R. C. Mullin, and S. A. Vanstone, "An Implementation of Elliptic Curve Cryptosystems Over F(2^155)," IEEE Journal on Selected Areas in Communications, vol. 11, no. 5, June 1993.
  8. C. C. Wang, et. al, "VLSI Architectures for Computing Multiplications and Inverses in GF(2^n)," IEEE Transactions on Computers, vol. C-34, no. 8, Aug. 1985.
  9. G. B. Agnew, et al., "An Implementation for a Fast Public-Key Cryptosystem," Journal of Cryptology, vol. 3, 1991, pp. 63-79.
  10. G. B. Agnew, et al., "Arithmetic Operations in GF(2^n)," Journal of Cryptology, vol. 6, 1993, pp. 3-13.
Prerequisites: basics of number theory
 

Hardware:

Project IH-1

Title: Modular exponentiation of large integers using Montgomery algorithm.

Description:

The fastest reported hardware implementation of the RSA cryptosystem [1] runs at a speed of 1 Mbit/s for a 512-bit RSA key. This speed is much greater than for earlier reported designs. Achieving such high performance was possible by employing the Montgomery algorithm (also called Hensel's odd division) and several other design tricks, such as quotient pipelining, a carry-completion adder, etc. Your task is to implement modular exponentiation with the Montgomery algorithm at the logic level and to investigate the possibilities of further improvements in circuit performance and chip complexity.

Literature:

  1. M. Shand, J. Vuillemin, "Fast Implementations of RSA Cryptography," Proc. 11th IEEE Symposium on Computer Arithmetic, pp. 252-259.
  2. S. Dusse, B. Kaliski Jr., "A Cryptographic Library for the Motorola DSP56000," Proc. EUROCRYPT'90, pp. 230-243.
Prerequisites: logic level design within Cadence or other CAD environment
 

Project IH-2

Title: Modular exponentiation with a systolic array.

Description:

One of the modern methods for the design of high-performance digital circuits is the use of systolic arrays. A systolic array is a regular structure composed of only a few different types of simple cells. In a systolic array, information is exchanged only between neighboring cells, and the data flows simultaneously in at least two different directions. A design based on the concept of a systolic array is parallel in nature, perfectly scalable, and easy to lay out. Several such designs have been proposed in the literature for fast modular exponentiation of large integers in RSA implementations; there is no agreement, however, on which of them offers the best performance together with minimum complexity. Your task is to choose the reported design you consider best, implement it at the logic level, verify its functionality, assess the number of gates, and predict the maximum speed of the circuit.

Literature:

  1. S. Even, "Systolic Modular Multiplication," Proc. CRYPTO'90, pp. 619-624.
  2. K. Iwamura, T. Matsumoto, and H. Imai, "High-Speed Implementation Methods for RSA scheme," Proc. CRYPTO' 92, pp. 221-238.
  3. K. Iwamura, T. Matsumoto, and H. Imai, "An Implementation Method for RSA Cryptosystem with Parallel Processing," Transactions of the Institute of Electronics, Information, and Communication Engineers, vol. J75-A, no. 8, August 1992, pp. 1301-1311.
  4. J. Sauerbrey, "A Modular Exponentiation Unit Based on Systolic Arrays".
  5. P. Kornerup, "A Systolic, Linear-Array Multiplier for a Class of Right-Shift Algorithms," IEEE Transactions on Computers, vol.43, no.8, August 1994.
Prerequisites: logic level design in Cadence or other CAD environment

Project IH-3

Title: Elliptic curve cryptosystem.

Description:

Elliptic curve cryptosystems are a new, emerging group of public key cryptosystems which may successfully compete with RSA in the near future. It is believed that the same strength can be achieved by using a 1024-bit key in RSA and only a 160-bit key in the corresponding elliptic curve cryptosystem. Additionally, the strength of elliptic curve cryptosystems grows much faster with key size increases than does the strength of RSA or DSS. As a result, very compact and fast implementations of these cryptosystems are possible, making them the best choice for applications where integrated-circuit space, computational power, or bandwidth are limited. Your task is to implement the Diffie-Hellman elliptic curve system at the logic level and assess its maximum performance.

Literature:

  1. A. Menezes, "Elliptic Curve Cryptosystems," RSA Laboratories' CryptoBytes, vol. 1, no. 2, summer 1995, pp. 1-4.
  2. S. Vanstone, "An Introduction to Elliptic Curve Cryptosystems," presentation at RSA Data Security Conf., January 1995.
  3. D. R. Stinson, "Cryptography: Theory and Practice," chapter 5.2.
  4. A. Menezes, "Elliptic Curve Public Key Cryptosystems," Kluwer Academic Publishers, 1993.
  5. "IEEE 1363: Standard for Public Key Cryptography," available by www from http://stdsbbs.ieee.org/groups/1363/
  6. A. Menezes and S. Vanstone, "Elliptic Curve Cryptosystems and Their Implementation," Journal of Cryptology, vol. 6, no.4, Autumn 1993, pp. 209-224.
  7. G. B. Agnew, R. C. Mullin, and S. A. Vanstone, "An Implementation of Elliptic Curve Cryptosystems Over F(2^155)," IEEE Journal on Selected Areas in Communications, vol. 11, no. 5, June 1993.
  8. C. C. Wang, et. al, "VLSI Architectures for Computing Multiplications and Inverses in GF(2^n)," IEEE Transactions on Computers, vol. C-34, no. 8, Aug. 1985.
  9. G. B. Agnew, et al., "An Implementation for a Fast Public-Key Cryptosystem," Journal of Cryptology, vol. 3, 1991, pp. 63-79.
  10. G. B. Agnew, et al., "Arithmetic Operations in GF(2^n)," Journal of Cryptology, vol. 6, 1993, pp. 3-13.
Prerequisites: logic level design within Cadence, Altera or other CAD environment

Project IH-4

Title: IDEA.

Description:

The IDEA cipher, designed in 1990-1992 at ETH in Zurich (a European equivalent of MIT), is a possible replacement for the old American standard DES. Bruce Schneier calls IDEA "the best and most secure block algorithm available to the public at this time." The algorithm is already implemented within PGP (Pretty Good Privacy) - an Internet de facto standard for secure mail. The input and output blocks in IDEA are 64 bits long, while the key is 128 bits long (as compared to the 56-bit key in DES). There are only 3 internal operations in the algorithm, all operating on 16-bit sub-blocks, namely: a) XOR, b) addition modulo 2^16, and c) multiplication modulo 2^16 + 1. Your task is to design and optimize a circuit that implements the IDEA algorithm. The circuit can be implemented as an ASIC or an FPGA.
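
Of the three operations, only the multiplication modulo 2^16 + 1 needs special care: the all-zero sub-block is taken to represent 2^16, so that every 16-bit value has a multiplicative inverse and the result always fits back into 16 bits. The hedged C sketch below models just this operation, as a software reference for checking a hardware design; it is not part of the cipher circuit itself.

    #include <stdint.h>
    #include <stdio.h>

    /* IDEA multiplication modulo 2^16 + 1 = 65537 (a prime).  The 16-bit
       value 0 stands for 2^16, so the operation works in the full
       multiplicative group modulo 65537.                                */
    uint16_t idea_mul(uint16_t a, uint16_t b)
    {
        uint32_t x = a ? a : 0x10000;        /* 0 encodes 2^16 */
        uint32_t y = b ? b : 0x10000;
        uint32_t p = (uint32_t)((uint64_t)x * y % 0x10001);
        return (uint16_t)(p == 0x10000 ? 0 : p);
    }

    int main(void)
    {
        /* 0 encodes 2^16, which is its own inverse; 3 * 21846 = 65538 = 1 (mod 65537) */
        printf("%u %u\n", (unsigned)idea_mul(0, 0), (unsigned)idea_mul(3, 21846));
        return 0;
    }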

Literature:

  1. Ch. Kaufman, R. Perlman, and M. Speciner, "Network Security: Private Communication in a Public World," PTR Prentice Hall, Englewood Cliffs, 1995, pp. 74-79.
  2. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapter 7.6, pp. 263-266.
  3. A. Curiger, H. Bonnenberg, R. Zimmerman, N. Felber, H. Kaeslin, and W. Fichtner, "VINCI: VLSI Implementation of the New Block Cipher IDEA," Proc. IEEE CICC'93, San Diego, CA, May 1993, pp. 15.5.1-15.5.4.
  4. A. Curiger and B. Stuber, "Specification for the IDEA Chip," Technical Report No. 92/03, Institut fur Integrierte Systeme, ETH Zurich, Feb. 1992.
Prerequisites: logic level design within Cadence, Altera or other CAD environment

Generation and management of cryptographic keys

Software:

Project KS-1

Title: Evaluating random number generators.

Description:

Making a computer or any other electronic device generate a truly random string of bits is much harder than it might appear. On the other hand, truly random strings of bits are indispensable for secure cryptographic systems. A primary use is to generate secret keys for classical cryptography and private keys for public key cryptography. A poor random number generator used for this purpose may allow the attacker to optimize an exhaustive key-search attack by trying the most probable keys first. As a result, the time necessary to find the correct key may be orders of magnitude shorter than the time predicted from analysis of the algorithm itself. Some of today's cryptoalgorithms, such as the American Digital Signature Standard (DSS), require even more random parameters, and the predictability of these parameters may pose the same threat to system security as revealing a secret or a private key. As a result, a number of vendors offer special hardware random-number generators based on various physical phenomena, such as thermal noise from a semiconductor diode, the instability of a free-running oscillator, or radioactive decay. In purely software systems, such as the public domain PGP or PEM, much simpler sources of randomness must be used. These include measuring the time between successive keystrokes, mixing unpredictable multi-user system characteristics, etc. Your task is to develop a program that evaluates different random number generators on the basis of standard criteria [2]-[4], and then to apply your program to test available software and hardware random number generators.
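
As an illustration of the kind of criteria involved, the sketch below implements the simplest of the FIPS 140-1 statistical self-tests [4], the monobit test: in a 20,000-bit sample the number of ones must fall within a narrow band around 10,000 (the standard uses the bounds 9,654-10,346). A real evaluation program adds the poker, runs, and long-run tests of [4] and Maurer's universal test [3]; here the C library rand() is used purely as a placeholder bit source.

    #include <stdio.h>
    #include <stdlib.h>

    #define NBITS 20000

    /* Monobit test in the style of FIPS 140-1: count the ones in a
       20,000-bit sample and check that the count stays inside the
       prescribed band.  rand() stands in for the generator under test. */
    int main(void)
    {
        long ones = 0;
        for (long i = 0; i < NBITS; i++)
            ones += rand() & 1;            /* low bit of a placeholder PRNG */

        printf("ones = %ld -> %s\n", ones,
               (ones > 9654 && ones < 10346) ? "PASS" : "FAIL");
        return 0;
    }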

Literature:

  1. B. Schneier, "Applied cryptography," chapter 17.14, pp. 421-428.
  2. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapters 5.4.4-5.4.5, pp. 181-185.
  3. U.M. Maurer, "A Universal Statistical Test for Random Bit Generators," Journal of Cryptology, vol. 5, no. 2, 1992, pp. 89-106.
  4. "Security Requirements for Cryptographic Modules," FIPS-PUB 140-1, section 4.11, "Self-Tests," pp. 32-33.
  5. D.E. Eastlake, S.D. Crocker, and J.I. Schiller, "Randomness Recommendations for Security," RFC 1750, December 1994, available via ftp from ftp.internic.net/rfc and via www from http://www.ds.internic.net.
Prerequisites: C or Pascal in PC or UNIX environment, elementary electronics
 

Project KS-2

Title: Generating strong prime numbers for RSA using probabilistic Rabin-Miller algorithm.

Description:

Creating a pair of corresponding public and private keys for the RSA cryptosystem requires the generation of two very large prime numbers. The minimum length of these primes used in today's RSA implementations is 256 bits, which corresponds to numbers on the order of 10^77. Due to the fast progress in the factorization algorithms used to break RSA, it is currently recommended to use 384-, 512-, or even 1024-bit primes. It may therefore be surprising that generating such long primes takes only seconds on a SPARC or a PC. This is possible thanks to fast probabilistic primality tests. A single iteration of a probabilistic test indicates either that the tested number is composite (for sure) or that it is probably prime (with some probability). Several iterations of this test may decrease the probability of the test declaring a composite number to be prime to 10^-20, 10^-40, or any other value chosen by the user. Since an error probability of 10^-20 is still smaller than the odds of winning the top prize in a U.S. state lottery and being killed by lightning on the same day, people usually do not worry about it. The two prime numbers used in the RSA cryptosystem are often required to have special properties that make factorization of their product more difficult. A method for generating numbers with these properties, so-called strong primes, takes only about 20% more time than the generation of an arbitrary prime. Your task is to implement and optimize a procedure for generating strong primes using the probabilistic Rabin-Miller algorithm, and to examine the time taken by your program to generate primes of different sizes.
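
The core of such a generator is the Rabin-Miller witness test itself. The hedged sketch below shows the test for 64-bit candidates only, using the non-standard __uint128_t type of GCC/Clang to keep the modular products exact; the project performs the same steps on multiprecision integers, repeats the test for enough random bases to reach the chosen error bound, and adds the extra conditions that make the resulting primes strong.

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned __int128 u128;    /* GCC/Clang extension */

    static uint64_t powmod(uint64_t b, uint64_t e, uint64_t n)
    {
        uint64_t r = 1;
        b %= n;
        for (; e; e >>= 1) {
            if (e & 1) r = (uint64_t)((u128)r * b % n);
            b = (uint64_t)((u128)b * b % n);
        }
        return r;
    }

    /* One Rabin-Miller round: returns 0 if base a proves n composite,
       1 if n is a probable prime with respect to a.  Requires n odd, n > 2. */
    int rabin_miller_round(uint64_t n, uint64_t a)
    {
        uint64_t d = n - 1;
        int s = 0;
        while ((d & 1) == 0) { d >>= 1; s++; }        /* n - 1 = d * 2^s */

        uint64_t x = powmod(a, d, n);
        if (x == 1 || x == n - 1) return 1;
        for (int i = 1; i < s; i++) {
            x = (uint64_t)((u128)x * x % n);
            if (x == n - 1) return 1;
        }
        return 0;                                     /* a witnesses compositeness */
    }

    int main(void)
    {
        uint64_t n = 0xFFFFFFFFFFFFFFC5ULL;           /* a large odd candidate */
        uint64_t bases[] = {2, 3, 5, 7, 11, 13, 17};
        int probable_prime = 1;
        for (int i = 0; i < 7 && probable_prime; i++)
            probable_prime = rabin_miller_round(n, bases[i]);
        printf("%s\n", probable_prime ? "probably prime" : "composite");
        return 0;
    }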

Literature:

  1. Ch. Kaufman, R. Perlman, and M. Speciner, "Network Security: Private Communication in a Public World," PTR Prentice Hall, Englewood Cliffs, 1995, pp. 138-140.
  2. B. Schneier, "Applied cryptography," chapter 11.5, pp. 258-261.
  3. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapter 4.2, pp. 135-142.
  4. J. Brandt, I.B. Damgard, and P. Landrock, "Speeding-up Prime Number Generation," Proc. ASIACRYPT'91, pp. 440-449.
  5. J. Gordon, "Strong Primes are Easy to Find," Proc. EUROCRYPT'84, pp. 216-223.
  6. J. Brandt and I.B. Damgard, "On Generation of Probable Primes by Incremental Search," Proc. CRYPTO'92, pp. 358-369.
  7. A. Bosselaers, B. Preneel (Eds.), "Integrity Primitives for Secure Information Systems: Final Report of RACE Integrity Primitives Evaluation, RIPE-RACE 1040," chapter 9, "RSA Key Generation," Springer 1995, pp. 213-231.
  8. RSAREF library, available via ftp from ftp://ftp.rsa.com.
Prerequisites: C or Pascal in PC or UNIX environment
 

Project KS-3

Title: Deterministic prime number generation for RSA using Maurer's method.

Description:

Testing whether a randomly selected number is prime is much easier than factoring (polynomial vs. subexponential complexity). Still, proving with certainty that a given large number is prime is not easy and efficient enough to be used for generating keys in cryptographic applications. As a result, two practical approaches have been developed for generating large random prime numbers. The first (see project KS-2) is to establish the primality of a given number with some small error probability acceptable to the user (e.g., 10^-20). The second approach is to generate a prime number together with a certificate of primality, which gives absolute certainty that the number is prime. Surprisingly, the most efficient algorithms from both groups, the probabilistic Miller-Rabin algorithm and the deterministic Maurer algorithm, take approximately the same time to execute. The deterministic method developed by Maurer in 1989 is a recursive algorithm, based on a largely forgotten theorem due to Pocklington, published as early as 1914. The popularity of Maurer's algorithm is growing, but a vast majority of cryptographic products still use probabilistic primality testing. Your task is to implement Maurer's method, and to examine the time taken by your program to generate primes of different sizes.
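
The certificate behind Maurer's method comes from Pocklington's theorem: if n - 1 = F*R with the factorization of F completely known and F > sqrt(n), and if for every prime q dividing F some base a satisfies a^(n-1) = 1 (mod n) and gcd(a^((n-1)/q) - 1, n) = 1, then n is prime. The toy sketch below checks the two conditions for a single base and a single prime q on 64-bit numbers; it illustrates only the certificate, not Maurer's recursion, which constructs n from a smaller proven prime at each level.

    #include <stdint.h>
    #include <stdio.h>

    typedef unsigned __int128 u128;    /* GCC/Clang extension */

    static uint64_t gcd64(uint64_t a, uint64_t b)
    {
        while (b) { uint64_t t = a % b; a = b; b = t; }
        return a;
    }

    static uint64_t powmod(uint64_t b, uint64_t e, uint64_t n)
    {
        uint64_t r = 1;
        b %= n;
        for (; e; e >>= 1) {
            if (e & 1) r = (uint64_t)((u128)r * b % n);
            b = (uint64_t)((u128)b * b % n);
        }
        return r;
    }

    /* Pocklington conditions for candidate n, base a, and one prime q
       dividing the factored part F of n - 1:
         (1) a^(n-1) = 1 (mod n)
         (2) gcd(a^((n-1)/q) - 1, n) = 1
       If they hold for every prime q | F and F > sqrt(n), n is prime.   */
    int pocklington_check(uint64_t n, uint64_t a, uint64_t q)
    {
        if (powmod(a, n - 1, n) != 1)
            return 0;
        uint64_t t = powmod(a, (n - 1) / q, n);
        return gcd64((t + n - 1) % n, n) == 1;
    }

    int main(void)
    {
        /* n = 1019 = 2*509 + 1; the factored part F = 509 (prime) exceeds sqrt(1019) */
        uint64_t q = 509, n = 2 * q + 1;
        printf("%s\n", pocklington_check(n, 2, q) ? "conditions hold" : "fail");
        return 0;
    }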

Literature:

  1. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapter 4.4.4, pp. 152-154.
  2. U.M. Maurer, "Fast Generation of Prime Numbers and Secure Public-Key Cryptographic Parameters," Journal of Cryptology, vol. 8, 1995, pp. 123-155.
  3. J. Brandt, I. Damgard, and P. Landrock, "Speeding-up Prime Number Generation," Proc. ASIACRYPT'91, pp. 440-449.
  4. A. Bosselaers, B. Preneel (Eds.), "Integrity Primitives for Secure Information Systems: Final Report of RACE Integrity Primitives Evaluation, RIPE-RACE 1040," chapter 9, "RSA Key Generation," Springer 1995, pp. 213-231.
  5. RSAREF library, available via ftp from ftp://ftp.rsa.com.
Prerequisites: C or Pascal in PC or UNIX environment, recursion
 

Analytical:

Project KA-1

Title: Organization of a Certification Authority for the University or a large company.

Description:

Last year hundreds of U.S. companies started to develop their own proprietary Certification Authorities. Several announced their plans to become the central Certification Authority for the U.S. or even the whole Internet. People started to ask: "Why does everyone want to be a Certification Authority? Is it because of patriotism, power, money, or maybe other reasons?" Certification Authorities (CAs) are necessary to make the use of public key cryptography for commercial purposes safe, easy, and efficient. Their primary function is to register public keys generated by individuals, and to issue so-called certificates (also called digital IDs) that bind a public key to a given person. In their simplest form, certificates contain a public key and a name signed with the CA's private key. Certificates, as opposed to public keys used alone, can be safely stored in a public directory and sent over an insecure network. They allow everyone to securely communicate (and do business) with people they have never met before, without any prior arrangements. Creating a Certification Authority for any large organization or company requires many technical (as well as administrative) decisions. You must decide what software/hardware to use, what services/algorithms/standards should be supported, how keys are generated and stored, what the policy is regarding proof of a user's identity, how long keys are valid, whether they can be revoked, which public-key hierarchy (or hierarchies) your Certification Authority should subscribe to, etc. Your task is to develop a design for a Certification Authority for the whole UofR, your department, or a large company of your choice.

Literature:

  1. "VeriSign Information Desk About Digital IDs," available via www from http://digitalid.verisign.com/info_ctr.htm.
  2. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, "Handbook of Applied Cryptography," chapters 1.11, 13.4, 13.6.
  3. VeriSign Certification Practice Statement, Proc. RSA Data Security Conf. 1997.
  4. Utah Digital Signature Law, Salt Lake City, 1995.
  5. D.E. Coe and F.J. Smith, "Developing and Deploying a Corporate-Wide Digital Signature Capability," ACM-SIGSAC Review, Special Group on Security Audit and Control, pp. 5-8.
  6. W. Ford, "Advances in Public-Key Certificate Standards," ACM-SIGSAC Review, Special Group on Security Audit and Control, pp. 9-15.
  7. S. Kent, "Reasoning About Public-Key Certification," presentation at RSA Data Security Conf., January 1995.
Prerequisites: None
 

E-mail & WWW security

Software/Analytical:

Project MS-1

Title: On-line time-stamping.

Description:

Timestamping is a method of proving that a given electronic document existed at a specific time and that not a single bit of the document has been modified since then. It prevents backdating of electronic documents and any form of tampering with them. Timestamping supports the use of digital signatures, which only associate the author with the document and have no mechanism to prove the time when the document was created. Actually, digital signatures can hardly be used alone for any purpose. Surprisingly, the existing implementation of timestamping does not require access to the contents of the document; does not require any public, private, or secret key transformations; and can be renewed and thus remain secure almost indefinitely. Timestamping, instead of relying on ciphers and the security of their keys, is based on keyless hash functions and the regular publication of a few lines in the Public and Commercial Notices section of the Sunday New York Times. It is very efficient, can be performed on-line, and can be applied to any kind of data (documents, mail, graphics, images, video, etc.). Your task is to develop your own implementation of a timestamping service for use within UofR, and to try to integrate it with a selected application, such as electronic mail or a word processor.

Literature:

  1. "Answers to Frequently Asked Questions about Record Authentication and Digital Notary Service," available via www under http://www.surety.com.
  2. S. Haber, B. Kaliski, and W. S. Stornetta, "How Do Digital Time-Stamps Support Digital Signatures?," RSA Laboratories' CryptoBytes, vol. 1, no. 3, autumn 1995, pp. 14-15.
  3. S. Haber and W. S. Stornetta, "How to Time-Stamp a Digital Document," Journal of Cryptology, vol. 3, 1991, pp. 99-111.
  4. D. Bayer, S. Haber, and W. S. Stornetta, "Improving the Efficiency and Reliability of Digital Time-Stamping," Sequences'91: Methods in Communication, Security, and Computer Science, Springer Verlag, 1992, pp. 329-334.
  5. D. Trowbridge, "Imagine a Notary Stamp for Electronic Documents," Computer Technology Review, vol. XV, no. 4, April 1995.
  6. Information about services provided by Surety Technologies, Inc., available via www under http://www.surety.com.
Prerequisites: C in UNIX environment, basics of client/server protocols
 

Project MS-2

Title: Simple payment protocol for a cybermall under WWW.

Description:

Imagine that you have a fantastic product to sell over the network. You want to create a cybermall and make your product available to all cyberusers. The only thing that scares you is the security of your site and the security of the payment transactions that will be conducted over the Internet. You know that there are basically two methods of dealing with payments over the network: the first involves the use of credit card numbers, transmitted securely over the network; the second assumes the use of digital cash - an electronic equivalent of paper bills and metal coins, permitting full anonymity of the buyer. You wonder which of these methods would be preferred by your customers. Which of them is more secure from your point of view, and easier to implement? You also know that there are several recently proposed standards for secure WWW protocols (SSL, S-HTTP), and that there exist products that already implement strong authentication and confidentiality. But are they safe? Are they easy to use? Maybe it would be better to write something yourself? Your task is to investigate existing solutions to the problem of a secure cybermall, choose the best one, and possibly improve it by writing your own piece of software.

Literature:

  1. L.D. Stein, "How to Set Up and Maintain a World Wide Web Site - The Guide to Information Providers," chapter 4 - "Security," Addison-Wesley, 1995, pp. 125-167.
  2. B. Schneier, "Applied cryptography," chapter 6.4, pp. 138-147.
  3. Information available via www from http://www.cybercash.com.
  4. Information available via www from http://www.digicash.com.
Prerequisites: http and other Internet protocols
 

Analytical:

Project MA-1

Title: Comparison of the security standards for multimedia mail, S/MIME and MOSS, and analysis of emerging implementations.

Description:

PGP (Pretty Good Privacy) is today's de facto standard for secure Internet mail. It is very well suited for individual people protecting their private correspondence, but not easily scalable to large enterprises and commercial use. PEM (Privacy Enhanced Mail) was the first attempt to develop a specification for secure mail for commercial purposes. It has not become very popular, mainly because of the lack of the public key infrastructure (Certification Authorities) it demands. The other drawbacks of this standard are its overly restricted formats and mechanisms (7-bit text messages only, no privacy without authentication, etc.). With the emergence of the multimedia mail standard MIME, a new standard for secure electronic mail is needed. Two such standards have been proposed: one, called MOSS (MIME Object Security Services), was published in October 1995 and developed mainly by Trusted Information Systems; the other, S/MIME (Secure MIME), has been developed by RSA Data Security, Inc. The main purpose of the standardization is to provide interoperability between e-mail packages coming from different vendors. As a result, only one of these proposals has a chance to succeed. Public-domain implementations of both standards have already been released, and various vendors have announced the release of commercial products embracing one of these standards in 1996. Your task is to analyze both proposals and their implementations, find similarities and differences between them, and try to predict which of these standards will become more popular on the Internet.

Literature:

About S/MIME:

  1. "Frequently Asked Questions about S/MIME," available via www from http://www.rsa.com/smime.
  2. "S/MIME Resources," available via www from http://www.rsa.com/smime/html/resources.html.
  3. S. Dusse, "Developing S/MIME - Complaint Applications," presentation at the RSA Data Security Conf., January 1996.
About MOSS:
  1. S. Crocker, N. Freed, J. Galvin, and S. Murphy, "MIME Object Security Services," RFC 1848, October 1995, available via ftp from ftp.internic.net/rfc and via www from http://www.ds.internic.net.
  2. "Internet MIME Object Security Services Standard (TIS/MOSS)," available via www from http://www.tis.com under Network Security.
  3. "Trusted Mail@ (TMail)," available via www from http://www.tis.com under Network Security.
Prerequisites: None
 

Authentication and access control

Software:

Project AS-1

Title: Building a simple firewall using a public domain software toolkit provided by Trusted Information Systems.

Description:

Firewalls are the hottest and best-selling security product in history. A single firewall can cost as much as $100,000, and the number of vendors is constantly growing. On the other hand, you can fairly easily construct a simple (and almost as secure) firewall for free, using public domain software toolkits. The basic idea of a firewall is to separate your internal network from the global Internet. All traffic coming in from the Internet or going out from the internal network must pass through the firewall. The firewall decides whether this traffic should be permitted or blocked. By appropriately building or configuring the firewall, you can block traffic from the outside to the inside (e.g., limit it to e-mail only), but permit users on the inside to communicate freely with the outside. Your task is to choose a part of the University network that should (in your judgment) be protected by a firewall; define what kind of traffic should be permitted; and then implement your firewall using the public domain software toolkit distributed by Trusted Information Systems.

Literature:

  1. L.D. Stein, "How to Set Up and Maintain a World Wide Web Site - The Guide to Information Providers," Addison-Wesley, 1995, pp. 152-167.
  2. "FAQ: Internet Firewalls Frequently Answered Queastions," available via www from http://www.tis.com under Network Security / Firewall Papers.
  3. D. B. Chapman and E. D. Zwicky, "Building Internet Firewalls," O'Reilly & Associates, Inc.
  4. M.J. Ranum, "Thinking About Firewalls," available via www from http://www.tis.com under Network Security / Firewall Papers.
  5. "TIS Internet Firewall Toolkit - Overview," available via www from http://www.tis.com/docs/products/fwtk.
  6. M.J. Ranum, "A Toolkit and Methods for Internet Firewalls," available via www from http://www.tis.com under Network Security / Firewall Papers.
  7. List of Internet Security Resources, Data Security Letter, no. 66, December 1995.
Prerequisites: Internet services and protocols (http, ftp, e-mail, telnet, etc.) and their implementation under UNIX

Project AS-2 (by Hoss Firooznia)

Title: Building a better 'su'

Description:

To perform administrative tasks on a UNIX system, a manager first needs to gain root privileges. Traditionally this is accomplished in one of two ways: the administrator either logs into the system as root (discouraged due to lack of accountability) or uses the 'su' program. Along with its cousins 'sudo' and 'osh', 'su' bestows root privileges when given the proper password: in the case of 'su', this is the root password; in the case of 'sudo' and 'osh', the administrator's own password is accepted if his or her userid appears in a pre-existing access control list.

Unfortunately, none of these methods scale very well or provide password security for remote administration over a network. The most serious problem is that all of these systems use reusable passwords vulnerable to eavesdropping. Another problem is convenience. It is generally considered poor practice to have the same root or user passwords on multiple machines, yet humans are not very good at remembering even one password, let alone tens or hundreds.

To address the problem of security I propose an authentication system based on one-time passwords; for convenience, the password database should be centralized, and the authentication system distributed in client/server form. Authentication will be done by a server program managing a single password database, and root privilege will be granted by client programs querying that server over an insecure network. The project will consist of two parts: First, I must design a secure protocol for this scheme, immune to eavesdropping and network spoofing. Second, I will attempt to write an administrative tool implementing this protocol, as a replacement for 'su' and similar programs.

Literature:

Prerequisites: UNIX services and protocols

Physics & cryptology

Analytical:

Project PA-1

Title: Factoring large numbers using a quantum computer according to Shor's algorithm.

Description:

The best algorithms for factoring large integers have subexponential complexity (i.e., complexity that is greater than any polynomial complexity and smaller than exponential complexity). This complexity is not low enough to factor numbers above 200 decimal digits, even assuming that all the computers in the world were in use for hundreds of years. In 1994, Peter Shor of AT&T Bell Labs discovered a factorization algorithm that runs in polynomial time, in theory permitting the factoring of numbers of almost arbitrary size. The algorithm can be implemented, however, only on a special type of computing machine, referred to as a quantum computer. As opposed to a classical computer, where logic states are represented using bits, that is, as a binary "zero" or "one," in a quantum computer logic states are represented using qubits, that is, as a superposition of the classical "zero" and the classical "one". Every qubit may have infinitely many different values, and thus can carry much more information than a classical bit. A quantum computer is not a general purpose machine; it has been shown to solve efficiently only a few problems, but among them are almost all of the computationally intensive problems public key cryptography is based on. Therefore, if ever developed, quantum computation would mean the demise of RSA and other public key cryptosystems. So far, only simple quantum gates have been shown to work, based on various physical principles, and there are plenty of problems to overcome before a quantum machine that can factor even a two-digit number is built. It is unclear whether such a machine, if ever developed, can be scaled up to a computer consisting of the billions of gates required to factor numbers of the size used in today's implementations of RSA. Shor's algorithm has two phases: the first is based on quantum computing, the second on classical computations. The classical phase involves efficient algorithms known from number theory, such as continued fraction expansion and Euclid's algorithm. The entire algorithm is probabilistic, meaning that several repetitions of one or both phases, with different initial parameters, may be necessary to factor a number. Your task is to analyze the algorithm, determine the expected number of repetitions necessary to complete the factorization, and estimate the number of operations and quantum gates necessary to factor a number of a given size, taking into account the redundancy introduced by quantum error correction codes.
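
The classical phase can be prototyped without any quantum hardware. After the quantum phase one holds an integer c, measured from a register of Q = 2^q states, such that c/Q lies close to a multiple of 1/r, where r is the period of a^x mod N; the continued fraction expansion of c/Q recovers r (or a divisor of it, in which case the run is repeated), and gcd(a^(r/2) +/- 1, N) then yields the factors. The toy sketch below simulates only this classical post-processing for N = 15, with the "measured" value hard-coded.

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t u64;

    static u64 gcd(u64 a, u64 b) { while (b) { u64 t = a % b; a = b; b = t; } return a; }

    static u64 powmod(u64 b, u64 e, u64 n)      /* small moduli only */
    {
        u64 r = 1;
        b %= n;
        for (; e; e >>= 1) {
            if (e & 1) r = r * b % n;
            b = b * b % n;
        }
        return r;
    }

    /* Denominator of the best continued-fraction approximation to c/Q with
       denominator below N; this recovers the period r or a divisor of it.   */
    static u64 cf_denominator(u64 c, u64 Q, u64 N)
    {
        u64 num = c, den = Q;
        u64 k_pp = 1, k_p = 0, best = 1;        /* k_{-2} = 1, k_{-1} = 0 */
        while (den != 0) {
            u64 a = num / den, rem = num % den;
            u64 k = a * k_p + k_pp;             /* next convergent denominator */
            if (k >= N) break;
            best = k;
            k_pp = k_p;  k_p = k;
            num = den;   den = rem;
        }
        return best;
    }

    int main(void)
    {
        u64 N = 15, a = 7, Q = 2048;     /* toy case: 7 has period 4 modulo 15     */
        u64 c = 1536;                    /* stands in for a measurement near 3*Q/4 */

        u64 r = cf_denominator(c, Q, N);
        u64 x = powmod(a, r / 2, N);
        if (r % 2 == 0 && x != N - 1)
            printf("%llu = %llu * %llu\n", (unsigned long long)N,
                   (unsigned long long)gcd(x - 1, N),
                   (unsigned long long)gcd(x + 1, N));
        else
            printf("unlucky run: repeat with another base or measurement\n");
        return 0;
    }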

Literature:

  1. G. Brassard, "The Impending Demise of RSA?," RSA Laboratories' CryptoBytes, vol. 1, no. 1, spring 1995, pp. 1-4.
  2. S. L. Braunstein, "Quantum computation: a tutorial," available via www from http://chemphys.weizmann.ac.il/~schmuel/comp/comp.html.
  3. Ch. H. Bennett, "Quantum Information and Computation," Physics Today, October 1995, pp. 24-30.
  4. A. Ekert and R. Jozsa, "Quantum computation and Shor's factoring algorithm," Reviews of Modern Physics, vol. 68, no. 3, July 1996.
  5. P. W. Shor, "Algorithms for Quantum Computation: Discrete Logarithms and Factoring," Proc. 35th Annual Symp. on Foundations of Computer Science, Nov. 1994, pp. 124-134.
  6. V. Vedral, A. Barenco, and A. Ekert, "Quantum Networks for Elementary Arithmetic Operations".
  7. K. M. Obenland, "Feasibility of a Quantum Computer Architecture," Dissertation Proposal, ISI/USC, available from http://www.isi.edu/acal/quantum/quantum_intro.html.
  8. Collection of papers available from http://feynman.stanford.edu/qcomp/
Prerequisites: basics of number theory, basics of quantum physics (helpful but not required)
 

Project PA-2

Title: Solving a discrete logarithm problem using a quantum computer.

Description:

Many public key cryptosystems (such as the ElGamal encryption scheme, Diffie-Hellman key exchange, the Digital Signature Standard, etc.) are based on the difficulty of computing discrete logarithms in the multiplicative group of GF(P), the integers modulo a large prime P. The best classical algorithms for computing discrete logarithms have the same complexity as the best factorization algorithms. This means that the size of P used in implementations of these cryptoalgorithms must be as large as the size of the modulus N in RSA (above 200 decimal digits). Peter Shor, who discovered a polynomial-time algorithm for the factorization of large integers on a quantum computer (see project PA-1), has shown that quantum computation can be used to compute discrete logarithms as well. The algorithm, in the general case, is more sophisticated than the algorithm for factorization, but has the same polynomial complexity. Your task is to understand the algorithm and develop a graphical representation of the particular phases of the algorithm (similar to the representation used for the factorization algorithm). Additionally, you are asked to determine the expected number of repetitions of the algorithm necessary to complete the computations, and to estimate the number of operations and quantum gates necessary to implement the algorithm for a given size of the modulus P, taking into account the redundancy introduced by quantum error correction codes. Another open problem is the applicability of Shor's algorithm to breaking elliptic curve cryptosystems.

Literature:

  1. G. Brassard, "The Impending Demise of RSA?," RSA Laboratories' CryptoBytes, vol. 1, no. 1, spring 1995, pp. 1-4.
  2. S. L. Braunstein, "Quantum computation: a tutorial," available via www from http://chemphys.weizmann.ac.il/~schmuel/comp/comp.html.
  3. Ch. H. Bennett, "Quantum Information and Computation," Physics Today, October 1995, pp. 24-30.
  4. A. Ekert and R. Jozsa, "Quantum computation and Shor's factoring algorithm," Reviews of Modern Physics, vol. 68, no. 3, July 1996.
  5. P. W. Shor, "Algorithms for Quantum Computation: Discrete Logarithms and Factoring," Proc. 35th Annual Symp. on Foundations of Computer Science, Nov. 1994, pp. 124-134.
Prerequisites: basics of number theory, basics of quantum physics (helpful but not required)
 

Project PA-3

Title: Quantum cryptography: implementation and security of various schemes for quantum key exchange.

Description:

Quantum cryptography permits the exchange of a secret key between two people who have never met before. Its security is considered almost absolute, as it is based on fundamental laws of quantum physics (in particular the Heisenberg uncertainty principle) as opposed to the unproven mathematical assumptions, such as the difficulty of factoring large integers, that public key cryptography relies on. On top of this, quantum cryptography permits the communicating parties to discover any attempt by an adversary to passively eavesdrop on the channel, a property considered impossible to achieve by any classical means. The idea of quantum cryptography was first invented by Stephen Wiesner in the late 1960s, but his paper, written at that time, was rejected by reviewers and waited more than ten years for publication. Rediscovered at the beginning of the 1980s, quantum cryptography was still considered science fiction - a smart but unrealistic idea of a couple of crazy physicists. Surprisingly, the first apparatus implementing quantum cryptography was built as early as 1989. Although that first version was still primitive, permitting communication only through open air over a distance of about one foot, subsequent versions developed by various groups in the U.S., Canada, and Europe allowed communication through a standard commercial optical cable over distances in excess of 20 km. Currently, it seems that the main barriers to the acceptance of quantum cryptography are the cost of quantum communication equipment and the limited distance between the communicating parties. The security of quantum key exchange in current implementations cannot be considered absolute either, because of the discrepancy between the theoretical model of a protocol and its actual implementation (e.g., the use of faint light pulses instead of single photons). Your task is to investigate different schemes for quantum key exchange, and to assess their security against various eavesdropping strategies.

Literature:

  1. I. Peterson, "Bits of Uncertainty - Blazing a quantum trail to absolute secrecy," Science News, vol. 149, Feb. 10, 1996, pp. 90-92.
  2. Ch. H. Bennett, G. Brassard, and A. K. Ekert, "Quantum cryptography," Scientific American, October 1992, pp. 50-57.
  3. B. Schneier, "Applied cryptography," chapter 23.16, pp. 554-557.
  4. Ch. Bennett, F. Bessette, G. Brassard, L. Salvail, and J. Smolin, "Experimental quantum cryptography," Journal of Cryptology, vol. 5, no. 1, 1992.
  5. "A Bibliography of Quantum Cryptography" by Gilles Brassard, available via www from http://www.iro.umontreal.ca/~crepeau/Biblio-QC.html .
Prerequisites: basics of quantum physics and optics
 

Law & cryptology

Analytical:

Project LA-1

Title: Export control of strong cryptography and key recovery mechanisms.

Description:

For years strong cryptography was treated as munitions, and its export from the U.S. was completely banned. Philip Zimmermann, the author of PGP - the most widespread program for the protection of electronic mail on the Internet - was accused of violating export restrictions by making the source code of his program available by ftp. He was threatened with several years in jail and a huge fine before the investigation was finally dropped under the pressure of public opinion. Similar charges threaten everybody who takes abroad the diskette accompanying Bruce Schneier's book, even though the book itself, containing listings of the programs from this diskette, can be freely published in other countries. According to a survey organized by Trusted Information Systems, there exist approximately as many foreign as domestic cryptographic products and companies. The majority of foreign products employ algorithms developed in the U.S. In many cases, foreign products are of the same quality as those developed in the U.S., and have the clear advantage of being allowed to be sold all over the world (including the U.S.). American products must be developed in two versions: one for domestic use, with strong cryptography, and the other, "international" version, with weakened cryptoalgorithms (key size limited to 40 bits). This situation has caused growing resistance from American industry, and has brought several small changes to the government's position during the past few years. Still, the controversy seems far from resolved. A new hot dispute emerged after the government's attempt to introduce the Clipper chip, a new American standard developed by the NSA that provided law-enforcement agencies with the ability to conduct electronic surveillance of encrypted data. The standard was rejected by the commercial sector, but its idea initiated the development of alternative key recovery schemes. Currently, the issues of export control and key recovery seem to be inadvertently entangled, as, according to the new government position, cryptographic products that use strong cryptography are exportable under a general licensing program only if they contain a key recovery mechanism.

Literature:

  1. D. Denning, "The Clipper Encryption Systems," American Scientist, vol. 81, 1993, pp. 319-323.
  2. S. Osuna and K. Mendelson, "International Policy Committee (IPC) Activities on Cryptography," Data Security Letter, no. 68, Feb. 1996.
  3. D. Branstad, "Export of Key Escrow Cryptography: A New Goverment Position," Data Security Letter, no. 63, Sep. 1995.
  4. D. Branstad, "Goverment Announces New Cryptography Policy," Data Security Letter, no. 76, Nov. 1996.
  5. "How to use Key Escrow," collection of articles for a special issue of Communications of the ACM, March 1996, vol. 39, no. 3.
+ plenty of other papers

Prerequisites: basics of business law and international finance
 

Project LA-2

Title: Legal and liability issues related to the acceptance of digital signatures and operation of Certification Authorities.

Description:

A digital signature can perform the same functions as a handwritten signature, permitting financial transactions over the network between people who have never met before. Nevertheless, its use is not binding under today's law. Utah was the first U.S. state to introduce a law regarding the use of digital signatures for electronic commerce, in 1995. The same year, a similar law was introduced in California. Currently, several other states are preparing legislation regarding digital signatures. This legislation does not only equate the legal status of handwritten and digital signatures. It also provides the legal framework for the operation of Certification Authorities (see project KA-1), which register public keys used to verify signatures and issue public key certificates (also called digital IDs) that bind a public key to a given person. Your task is to analyze the current state of legislation regarding digital signatures and the liability of Certification Authorities.

Literature:

  1. B. Schneier, "Applied cryptography," chapter 25.16, pp. 618-619.
  2. Utah Digital Signature Law, Salt Lake City, 1995.
  3. R. Horning, "The enforceability of contracts negotiated in cyberspace," Proc. RSA Data Security Conf. 1997.
  4. "VeriSign Information Desk About Digital IDs," available via www from http://digitalid.verisign.com/info_ctr.htm.
  5. VeriSign Certification Practice Statement, Proc. RSA Data Security Conf. 1997.
Prerequisites: basics of business law
 

Number theory & cryptology

Analytical:

Project NA-1

Title: Choosing an elliptic curve for cryptographic applications.

Description:

Elliptic curve cryptosystems are a new, emerging class of public key cryptosystems that may in the future successfully compete with the current monopoly of the RSA cryptosystem. The security of most elliptic curve cryptosystems is based upon the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve. If the elliptic curve is chosen correctly, the best known algorithm for finding the discrete logarithm is of exponential difficulty. The security of RSA is based on the difficulty of factoring large integers, a problem for which quite efficient subexponential algorithms (such as the Number Field Sieve) have been developed (see project CS-3). As a result, the cryptographic keys may be significantly shorter for elliptic curve cryptosystems than for RSA, which results in faster and more compact implementations. The other advantage of using elliptic curves for constructing cryptosystems is that each user may (but does not need to) choose his/her own curve. If this is the case, reconstructing the private key of one user does not provide any information that could be used to reconstruct the private key of another user. According to current knowledge, choosing an appropriate elliptic curve is equivalent to calculating the number of points on the elliptic curve and checking whether this number has a large prime divisor. Only in this case does the discrete logarithm problem in the group of points on the elliptic curve have exponential complexity. Several methods have been devised for choosing an appropriate elliptic curve. The annex to the IEEE 1363 standard lists three of them: counting the number of points on a randomly chosen curve using Schoof's algorithm, constructing the curve using the Weil theorem, and constructing the curve using the method of complex multiplication. Your task is either to perform a comparative analysis of the last two methods, or to prepare a detailed specification of Schoof's algorithm and try to estimate its running time.
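
The quantity that Schoof's algorithm computes so cleverly can at least be defined and checked directly for a tiny field. The sketch below counts the points on y^2 = x^3 + ax + b over GF(p) for a small prime p using Euler's criterion, in time proportional to p (the curve coefficients are arbitrary toy values); Schoof's algorithm obtains the same number in time polynomial in log p, which is what makes curves of cryptographic size practical.

    #include <stdio.h>

    typedef long long i64;

    static i64 powmod(i64 b, i64 e, i64 p)
    {
        i64 r = 1;
        b %= p;
        for (; e; e >>= 1) {
            if (e & 1) r = r * b % p;
            b = b * b % p;
        }
        return r;
    }

    /* Number of points on y^2 = x^3 + ax + b over GF(p), counted naively:
       Euler's criterion f(x)^((p-1)/2) tells whether f(x) is a nonzero
       square (two y values), zero (one y value), or a non-square (none);
       the point at infinity adds one more.                               */
    i64 count_points(i64 p, i64 a, i64 b)
    {
        i64 count = 1;                        /* the point at infinity */
        for (i64 x = 0; x < p; x++) {
            i64 fx = ((x * x % p) * x + a * x + b) % p;
            if (fx == 0)
                count += 1;
            else if (powmod(fx, (p - 1) / 2, p) == 1)
                count += 2;
        }
        return count;
    }

    int main(void)
    {
        i64 p = 10007, a = 2, b = 3;
        printf("#E(GF(%lld)) = %lld\n", p, count_points(p, a, b));
        return 0;
    }

One then checks whether the resulting order has a large prime divisor, which is exactly the selection criterion described above.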

Literature:

  1. A. Menezes, "Elliptic Curve Cryptosystems," RSA Laboratories' CryptoBytes, vol. 1, no. 2, summer 1995, pp. 1-4.
  2. S. Vanstone, "An Introduction to Elliptic Curve Cryptosystems," presentation at RSA Data Security Conf., January 1995.
  3. D. R. Stinson, "Cryptography: Theory and Practice," chapter 5.2.
  4. A. Menezes, "Elliptic Curve Public Key Cryptosystems," Kluwer Academic Publishers, 1993.
  5. "IEEE 1363: Standard for Public Key Cryptography," available by www from http://stdsbbs.ieee.org/groups/1363/
  6. R. Lercier and F. Morain, "Counting the number of points on elliptic curves over finite fields: strategies and performances," Proc. EUROCRYPT'95, pp. 79-94.
  7. A. Menezes, S. Vanstone, and R. Zuccherato, "Counting points on elliptic curves over F(2^m)," Math. Comp., vol. 60, Jan. 1993, pp. 407-420.
  8. A. Miyaji, "Elliptic curves over F(p) suitable for cryptosystems," Proc. AUSCRYPT'92, pp. 479-491.
  9. A. Menezes and S. Vanstone, "Elliptic Curve Cryptosystems and Their Implementation," Journal of Cryptology, vol. 6, no.4, Autumn 1993, pp. 209-224.
Prerequisites: basics of number theory
 

Project NA-2

Title: Efficient operations on points of an elliptic curve over the Galois field GF(2^n).

Description:

Elliptic curves for cryptographic applications may be constructed over the fields GF(p), where p is a large prime, and GF(2^m), where m is some integer. Additionally, the operations in the field GF(2^m) - addition, multiplication, division, and remainder - can be defined in various ways, leading to equivalent (isomorphic) fields. The choice of these operations affects the efficiency and complexity of hardware and software implementations of elliptic curve cryptosystems. The two ways of defining GF(2^m) given in the IEEE P1363 standard use so-called polynomial and optimal normal bases. Either choice of basis leads to a hardware implementation that is more efficient than a hardware implementation of operations in GF(p). Your task is to analyze and compare the architectures used to implement multiplication and inversion in the fields GF(2^m) with polynomial and optimal normal bases, and to estimate the running times of these operations.
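
In a polynomial basis, the elements of GF(2^m) are binary polynomials of degree below m, addition is a bitwise XOR, and multiplication is a carry-less (shift-and-XOR) product reduced modulo a fixed irreducible polynomial. The hedged sketch below uses the toy field GF(2^8) with the irreducible polynomial x^8 + x^4 + x^3 + x + 1; the bit-serial loop mirrors the structure of the shift-and-XOR hardware multipliers analyzed in the references, and inversion is obtained here by exponentiation, since a^(2^m - 2) = a^(-1) for a != 0.

    #include <stdint.h>
    #include <stdio.h>

    /* GF(2^8) with the irreducible polynomial x^8 + x^4 + x^3 + x + 1.
       Elements are bytes; addition is XOR; multiplication is a bit-serial
       shift-and-XOR product reduced modulo the field polynomial.          */
    #define POLY 0x1B   /* low 8 bits of the reduction polynomial */

    uint8_t gf_mul(uint8_t a, uint8_t b)
    {
        uint8_t r = 0;
        for (int i = 0; i < 8; i++) {
            if (b & 1)
                r ^= a;                     /* accumulate a * x^i              */
            int carry = a & 0x80;           /* will x^8 appear after the shift? */
            a <<= 1;
            if (carry)
                a ^= POLY;                  /* reduce x^8 -> x^4 + x^3 + x + 1  */
            b >>= 1;
        }
        return r;
    }

    /* Inversion by exponentiation: a^(2^8 - 2) = a^254 = a^(-1) for a != 0. */
    uint8_t gf_inv(uint8_t a)
    {
        uint8_t r = 1;
        for (int e = 254; e; e >>= 1) {
            if (e & 1) r = gf_mul(r, a);
            a = gf_mul(a, a);
        }
        return r;
    }

    int main(void)
    {
        uint8_t a = 0x57;
        printf("a * a^-1 = 0x%02X\n", (unsigned)gf_mul(a, gf_inv(a)));   /* prints 0x01 */
        return 0;
    }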

Literature:

  1. A. Menezes, "Elliptic Curve Cryptosystems," RSA Laboratories' CryptoBytes, vol. 1, no. 2, summer 1995, pp. 1-4.
  2. S. Vanstone, "An Introduction to Elliptic Curve Cryptosystems," presentation at RSA Data Security Conf., January 1995.
  3. D. R. Stinson, "Cryptography: Theory and Practice," chapter 5.2.
  4. A. Menezes, "Elliptic Curve Public Key Cryptosystems," Kluwer Academic Publishers, 1993.
  5. "IEEE 1363: Standard for Public Key Cryptography," available by www from http://stdsbbs.ieee.org/groups/1363/
  6. G. B. Agnew, et al., "Arithmetic Operations in GF(2^n)," Journal of Cryptology, vol. 6, 1993, pp. 3-13.
  7. C. C. Wang, et. al, "VLSI Architectures for Computing Multiplications and Inverses in GF(2^n)," IEEE Transactions on Computers, vol. C-34, no. 8, Aug. 1985.
  8. G. B. Agnew, R. C. Mullin, and S. A. Vanstone, "An Implementation of Elliptic Curve Cryptosystems Over F(2^155)," IEEE Journal on Selected Areas in Communications, vol. 11, no. 5, June 1993.
  9. A. Menezes and S. Vanstone, "Elliptic Curve Cryptosystems and Their Implementation," Journal of Cryptology, vol. 6, no.4, Autumn 1993, pp. 209-224.
  10. G. B. Agnew, et al., "An Implementation for a Fast Public-Key Cryptosystem," Journal of Cryptology, vol. 3, 1991, pp. 63-79.
Prerequisites: basics of number theory