Description:
Bruce Schneier, in his book "Applied Cryptography", states that if a software security program proclaims that it has a "proprietary" encryption algorithm - significantly faster than DES - the odds are that it is some variant of the Vigenère polyalphabetic cipher. Cryptanalysis of the Vigenère cipher is believed to take only a few seconds with a computer. The algorithm for breaking the cipher is based on the so-called "index of coincidence" and is very well described in the literature. Your task is to implement this algorithm, test it, and then try to apply your program to files encrypted using WordPerfect, Lotus 1-2-3, etc. An additional task would be to create an updated list of applications, such as WordPerfect, that still use a "proprietary" algorithm based on the Vigenère cipher (see the catalog of Access Data [2] for a couple of likely candidates).
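As a starting point, here is a minimal sketch in Python of the index-of-coincidence statistic and one common way of using it to estimate the key length (the 0.065/0.038 reference values for English and for random text are standard textbook figures; the function names are illustrative):

    from collections import Counter

    def index_of_coincidence(text):
        """Probability that two letters drawn at random from `text` are identical."""
        n = len(text)
        if n < 2:
            return 0.0
        counts = Counter(text)
        return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    def rank_key_lengths(ciphertext, max_len=20):
        """For each trial period, average the IC of the columns obtained by taking
        every period-th letter.  English plaintext gives an IC close to 0.065,
        random text about 0.038; the key length shows up as the smallest period
        whose columns look like monoalphabetically enciphered English."""
        letters = [c for c in ciphertext.upper() if c.isalpha()]
        scores = []
        for period in range(1, max_len + 1):
            cols = [letters[i::period] for i in range(period)]
            avg_ic = sum(index_of_coincidence(col) for col in cols) / period
            scores.append((period, avg_ic))
        return scores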
Literature:
Description:
In December 1995, Paul C. Kocher, a 22-year-old independent cryptography consultant from Stanford, CA, released a draft version of his paper warning vendors and users of the two most widespread public-key cryptoalgorithms, RSA and DSS, about an attack he had devised. The attack works under the condition that the intruder can measure the amount of time used to complete cryptographic operations. The extra computations necessary to mount the attack take time proportional to the number of bits of the operands, leading to a full recovery of the secret (private) key in less than one hour for practical operand lengths. Paul is working on a complete implementation of his idea, but he has already presented his preliminary results at the RSA Data Security and CRYPTO conferences in 1996. His attack has won the recognition of the most respected cryptographers, including the authors of the RSA cryptosystem. A countermeasure eliminating the threat of this attack is currently being added to software products sold by RSA Data Security, Inc.; it is, however, not obvious how to protect all implementations of RSA and DSS against timing attacks. Your task is to implement the attack for a selected implementation of RSA available in the public domain (the RSAREF library, PGP, or PEM) on the basis of Kocher's preliminary papers. Additional tasks could be to analyze possible countermeasures, and to determine how the difficulty of the attack depends on the algorithm for modular multiplication used in the implementation of the cryptoalgorithm.
Literature:
e-mail: pck@cryptography.com
address: P.O. Box 8243, Stanford, CA, 94309, USA
Prerequisites: C in UNIX environment, basics of probability theory
Description:
Security of the RSA cryptosystem is based on the difficulty of finding the prime factors of large numbers. Therefore, RSA Data Security, Inc., which holds a patent for RSA, organizes a Factoring Challenge. Fifty-one integers, in the range from 100 to 500 decimal digits, were generated by choosing two primes at random and multiplying them together, according to the rules used in the RSA cryptosystem. The prime factors were afterwards destroyed. Anybody who is able to factor the smallest (!) unfactored number from the list receives an award. If there are no successful attempts within a given quarter, the award accumulates. The largest number from the list factored so far had 130 decimal digits. Computations are performed simultaneously by many volunteers from all over the world, who use the spare time of their machines to complete the initial, most time-consuming phase of the algorithm. The partial results obtained are sent over the Internet to one place, where the final result is calculated. The two fastest algorithms devised to date are the Quadratic Sieve (QS) and the Number Field Sieve (NFS). The latter is more efficient for numbers greater than 100 decimal digits, but its idea and implementation are more complicated. Your task is to get familiar with one of these algorithms and implement it. An additional task is to get familiar with a public-domain implementation of a program for the factorization of large numbers, and to associate different parts of its code with particular phases of the algorithm.
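Both QS and NFS ultimately look for a congruence of squares modulo n; a toy, brute-force illustration of that final step (practical only for very small n) might look like this in Python:

    from math import gcd, isqrt

    def congruence_of_squares(n):
        """Search for x, y with x^2 = y^2 (mod n) and x != +/-y (mod n);
        then gcd(x - y, n) and gcd(x + y, n) are nontrivial factors of n.
        QS and NFS spend almost all of their time constructing such a pair
        cleverly; here we simply brute-force it."""
        for x in range(isqrt(n) + 1, n):
            y2 = (x * x) % n
            y = isqrt(y2)
            if y * y == y2 and (x - y) % n != 0 and (x + y) % n != 0:
                return gcd(x - y, n), gcd(x + y, n)
        return None

    print(congruence_of_squares(8051))   # -> (83, 97), since 8051 = 83 * 97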
Literature:
Basic:
Description:
RC5 is a new block cipher devised by Ron Rivest - one of the inventors of the RSA cryptosystem - as an alternative to the old American standard DES. The cipher has a variable key size and a variable input/output block size. The variable key length permits RC5 to be exported abroad, but only under the condition that the key size is less than or equal to 40 bits. Such a small key length may make the cipher vulnerable to an exhaustive key-search attack. Your task is to design, at the logic level, an RC5-breaking machine, and to estimate the number of chips that are necessary to break a single key within one hour. You can follow the design and cost estimates for the DES-breaking machine devised by Michael Wiener [5]. The basic operations of RC5 are XOR, rotation, and addition.
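For reference, the RC5 encryption loop itself is only a few lines; the sketch below assumes 32-bit words and a caller-supplied expanded subkey array S (the key-expansion step is omitted):

    MASK = 0xFFFFFFFF          # RC5-32: 32-bit words

    def rotl(x, s):
        """Rotate a 32-bit word left by the low 5 bits of s."""
        s &= 31
        return ((x << s) | (x >> (32 - s))) & MASK

    def rc5_encrypt_block(A, B, S, rounds):
        """Encrypt the two-word block (A, B) with expanded subkeys S[0..2*rounds+1],
        using only the three basic operations: XOR, rotation, and addition mod 2^32."""
        A = (A + S[0]) & MASK
        B = (B + S[1]) & MASK
        for i in range(1, rounds + 1):
            A = (rotl(A ^ B, B) + S[2 * i]) & MASK
            B = (rotl(B ^ A, A) + S[2 * i + 1]) & MASK
        return A, B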
Literature:
Description:
The Karatsuba algorithm for multiplication is a recursive algorithm running in time proportional to n^(log2 3), i.e., about n^1.585, which offers an asymptotic advantage over the classical paper-and-pencil algorithm of n^2 complexity (n is the number of bits of the operands). Your task is to implement both the classical and the Karatsuba algorithm in software and find the minimum operand length for which the Karatsuba algorithm runs faster.
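A minimal Python sketch of the recursion (the 2^32 cutoff below is an arbitrary illustrative threshold for falling back to ordinary multiplication):

    def karatsuba(x, y):
        """Multiply nonnegative integers using three recursive half-size multiplications."""
        if x < 2**32 or y < 2**32:           # small operands: fall back to the builtin
            return x * y
        n = max(x.bit_length(), y.bit_length())
        half = n // 2
        xh, xl = x >> half, x & ((1 << half) - 1)
        yh, yl = y >> half, y & ((1 << half) - 1)
        p1 = karatsuba(xh, yh)
        p2 = karatsuba(xl, yl)
        p3 = karatsuba(xh + xl, yh + yl) - p1 - p2   # both cross terms from one product
        return (p1 << (2 * half)) + (p3 << half) + p2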
Literature:
Description:
The algorithm based on the Fast Fourier Transform, also known as the Schönhage-Strassen algorithm, is asymptotically the fastest known algorithm for multiplication, running in time proportional to n*log(n)*log(log(n)) (where n is the number of bits of the operands). However, its advantage appears only for very large operands. Recent rapid progress in traditional factorization algorithms, together with the appearance of factorization algorithms based on quantum computing, may mean that in the near future the only safe lengths of the operands used within the RSA cryptosystem will be in the range of several thousand bits, where the Schönhage-Strassen algorithm shows its superiority. Your task is to implement both the classical and the Schönhage-Strassen algorithm in software and find the minimum operand length for which the latter runs faster.
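The sketch below conveys the FFT-multiplication idea using floating-point complex arithmetic, which limits it to moderate operand sizes; the actual Schönhage-Strassen algorithm performs an exact FFT over a ring of integers modulo 2^k + 1:

    import cmath

    def fft(a, invert=False):
        """Recursive radix-2 FFT; len(a) must be a power of two."""
        n = len(a)
        if n == 1:
            return list(a)
        w = cmath.exp((-2j if invert else 2j) * cmath.pi / n)
        even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
        out, wk = [0] * n, 1
        for k in range(n // 2):
            out[k] = even[k] + wk * odd[k]
            out[k + n // 2] = even[k] - wk * odd[k]
            wk *= w
        return out

    def to_digits(v, base=256):
        digits = []
        while v:
            digits.append(v % base)
            v //= base
        return digits or [0]

    def fft_multiply(x, y, base=256):
        """Multiply integers by convolving their base-256 digit vectors with an FFT.
        Floating-point round-off limits this toy to moderate sizes."""
        xd, yd = to_digits(x, base), to_digits(y, base)
        size = 1
        while size < len(xd) + len(yd):
            size *= 2
        fa = fft(xd + [0] * (size - len(xd)))
        fb = fft(yd + [0] * (size - len(yd)))
        conv = fft([u * v for u, v in zip(fa, fb)], invert=True)
        return sum(round(c.real / size) * base**i for i, c in enumerate(conv))

    assert fft_multiply(123456789, 987654321) == 123456789 * 987654321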
Literature:
Description:
The Barrett algorithm for modular reduction of large integers is based on a very simple idea, similar to the way you would compute a remainder with a pocket calculator: division by the modulus is replaced by multiplication by a precomputed reciprocal. When efficiently implemented, this algorithm can work significantly faster than the classical algorithm based on trial division. Your task is to implement both algorithms, determine the regions of operand lengths in which each algorithm is superior, and assess how large the performance advantage of the Barrett algorithm over the classical algorithm can be for different operand lengths.
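A minimal sketch of Barrett reduction in Python, following the textbook formulation with base 2 (k is the bit length of the modulus, and mu is the precomputed reciprocal):

    def barrett_setup(m):
        """Precompute mu = floor(4^k / m), where k is the bit length of m."""
        k = m.bit_length()
        return k, (1 << (2 * k)) // m

    def barrett_reduce(x, m, k, mu):
        """Compute x mod m for 0 <= x < m^2 without dividing by m."""
        q = ((x >> (k - 1)) * mu) >> (k + 1)   # estimate of floor(x / m), low by at most 2
        r = x - q * m
        while r >= m:                          # at most two correction subtractions
            r -= m
        return r

    m = (1 << 127) - 1
    k, mu = barrett_setup(m)
    x = 123456789 ** 8                         # any x < m^2
    assert barrett_reduce(x, m, k, mu) == x % m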
Literature:
Description:
The Montgomery method is considered the fastest algorithm for modular multiplication reported in the open literature. (At the rump session of CRYPTO '95, Josh Benaloh announced that a faster algorithm had been developed at Microsoft, but the details were not revealed, and the algorithm remains proprietary.) The idea of the Montgomery algorithm is similar to multiplication using the Fast Fourier Transform. We want to compute C = A*B mod N. The arguments A and B are first transformed to another domain by computing A' = A*2^n mod N and B' = B*2^n mod N. In this new domain, modular multiplication can be performed significantly faster than in the original domain. This is achieved by computing C' = A' * B' * 2^-n mod N = (A*B) * 2^n mod N = C*2^n mod N. The return to the original domain is equivalent to a multiplication by 1 in the new domain (C = 1 * C' * 2^-n mod N). Modular exponentiation is a basic transformation of the majority of public-key cryptoalgorithms. In exponentiation, a long sequence of modular multiplications is performed. For such a sequence, the input and output transformations are performed only once. Switching to the other domain can therefore significantly speed up the whole exponentiation. Knowledge of the Montgomery algorithm is crucial for designers of the majority of today's software and hardware public-key-cryptosystem implementations. Your task is to develop an efficient implementation of the Montgomery algorithm and to compare your implementation with the implementations of modular multiplication available in the public domain (RSAREF, PGP, or PEM).
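A minimal Python sketch of the Montgomery reduction step (REDC) and a Montgomery multiplication built on it; R = 2^n, and the modulus N must be odd (the modular inverse uses Python 3.8's three-argument pow):

    def montgomery_setup(N, n):
        """R = 2^n must satisfy R > N and gcd(R, N) = 1; precompute N' = -N^-1 mod R."""
        R = 1 << n
        Nprime = (-pow(N, -1, R)) % R
        return R, Nprime

    def mont_reduce(T, N, n, R, Nprime):
        """REDC: return T * 2^-n mod N for 0 <= T < R*N, using only shifts and masks."""
        m = ((T & (R - 1)) * Nprime) & (R - 1)   # m = T * N' mod R
        t = (T + m * N) >> n                     # T + m*N is divisible by R by construction
        return t - N if t >= N else t

    def mont_mul(Ap, Bp, N, n, R, Nprime):
        """Multiply two values already in the Montgomery domain (X' = X * 2^n mod N)."""
        return mont_reduce(Ap * Bp, N, n, R, Nprime)

    N, n = 2**255 - 19, 256                      # an odd modulus and R = 2^256 > N
    R, Np = montgomery_setup(N, n)
    A, B = 1234567, 7654321
    Ap, Bp = (A * R) % N, (B * R) % N            # into the Montgomery domain
    Cp = mont_mul(Ap, Bp, N, n, R, Np)
    assert mont_reduce(Cp, N, n, R, Np) == (A * B) % N   # back to the ordinary domain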
Literature:
Description:
Elliptic curve cryptosystems are a new, emerging group of public-key cryptosystems which may successfully compete with RSA in the near future. It is believed that the same strength can be achieved by using a 1024-bit key in RSA and only a 160-bit key in the corresponding elliptic curve cryptosystem. Additionally, the strength of elliptic curve cryptosystems grows much faster with key size increases than does the strength of RSA or DSS. As a result, very compact and fast implementations of these cryptosystems are possible, making them the best choice for applications where computational power or bandwidth are limited.
Literature:
Description:
The fastest reported hardware implementation of the RSA cryptosystem [1] runs at a speed of 1 Mb/s for a 512-bit RSA key. This speed is much greater than that of earlier reported designs. Achieving such high performance was possible by employing the Montgomery algorithm (also called Hensel's odd division) and several other design tricks, such as quotient pipelining, a carry-completion adder, etc. Your task is to implement modular exponentiation with the Montgomery algorithm at the logic level and to investigate the possibilities for further improvements in circuit performance and chip complexity.
Literature:
Description:
One of the modern methods for the design of high-performance digital circuits is the use of systolic arrays. A systolic array is a regular structure composed of only a few different types of simple cells. In a systolic array, information is exchanged only between neighboring cells, and the data flows simultaneously in at least two different directions. A design based on the concept of a systolic array is parallel in nature, perfectly scalable, and easy to lay out. Several such designs have been proposed in the literature for fast modular exponentiation of large integers in RSA implementations; there is, however, no agreement on which of them offers the best performance together with minimum complexity. Your task is to choose what you consider the best reported design, implement it at the logic level, verify its functionality, estimate the number of gates, and predict the maximum speed of the circuit.
Literature:
Description:
Elliptic curve cryptosystems are a new, emerging group of public-key cryptosystems which may successfully compete with RSA in the near future. It is believed that the same strength can be achieved by using a 1024-bit key in RSA and only a 160-bit key in the corresponding elliptic curve cryptosystem. Additionally, the strength of elliptic curve cryptosystems grows much faster with key size increases than does the strength of RSA or DSS. As a result, very compact and fast implementations of these cryptosystems are possible, making them the best choice for applications where integrated-circuit space, computational power, or bandwidth are limited. Your task is to implement the Diffie-Hellman elliptic curve system at the logic level and assess its maximum performance.
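A software reference model is useful for verifying a logic-level design; below is a minimal sketch of affine point addition and double-and-add scalar multiplication over GF(p) for the curve y^2 = x^3 + a*x + b (the tiny curve and the keys in the usage example are illustrative only):

    def ec_add(P, Q, a, p):
        """Add points on y^2 = x^3 + a*x + b over GF(p); None is the point at infinity."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                        # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (lam * lam - x1 - x2) % p
        return x3, (lam * (x1 - x3) - y1) % p

    def ec_scalar_mult(k, P, a, p):
        """Double-and-add: the core operation of elliptic curve Diffie-Hellman."""
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P, a, p)
            P = ec_add(P, P, a, p)
            k >>= 1
        return R

    p, a = 97, 2                      # toy curve y^2 = x^3 + 2x + 3 over GF(97)
    G = (3, 6)                        # a point on that curve
    alice, bob = 11, 23               # toy private keys
    A = ec_scalar_mult(alice, G, a, p)
    B = ec_scalar_mult(bob, G, a, p)
    assert ec_scalar_mult(alice, B, a, p) == ec_scalar_mult(bob, A, a, p)   # shared secret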
Literature:
Description:
The IDEA cipher, designed in 1990-1992 at ETH in Zurich (a European equivalent of MIT), is a possible replacement for the old American standard DES. Bruce Schneier calls IDEA "the best and most secure block algorithm available to the public at this time." The algorithm is already implemented within PGP (Pretty Good Privacy) - an Internet de facto standard for secure mail. The input and output blocks in IDEA are 64 bits long, while the key is 128 bits long (as compared to the 56-bit key of DES). There are only 3 internal operations in the algorithm, all operating on 16-bit sub-blocks, namely: a) XOR, b) addition modulo 2^16, and c) multiplication modulo 2^16 + 1. Your task is to design and optimize a circuit that implements the IDEA algorithm. The circuit can be implemented as an ASIC or an FPGA.
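The only non-trivial operation is the multiplication modulo 2^16 + 1, in which the all-zero 16-bit sub-block represents 2^16; a small reference model in Python (a sketch for checking a hardware design, not a complete IDEA implementation):

    def idea_mul(a, b):
        """Multiplication modulo 2^16 + 1, with the 16-bit value 0 standing for 2^16."""
        if a == 0: a = 0x10000
        if b == 0: b = 0x10000
        r = (a * b) % 0x10001
        return 0 if r == 0x10000 else r

    def idea_add(a, b):
        """Addition modulo 2^16."""
        return (a + b) & 0xFFFF

    def idea_xor(a, b):
        return a ^ b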
Literature:
Description:
Making a computer or any other electronic device generate a truly random string of bits is much harder than it might appear. On the other hand, truly random strings of bits are indispensable for secure cryptographic systems. A primary use is to generate secret keys for classical cryptography and private keys for public-key cryptography. A poor random number generator used for this purpose may allow the attacker to optimize his exhaustive key-search attack by trying the most probable keys first. As a result, the time necessary to find the correct key may be orders of magnitude shorter than the time predicted from an analysis of the algorithm itself. Some of today's cryptoalgorithms, such as the American Digital Signature Standard (DSS), require even more random parameters, and predictability of these parameters may pose the same threat to system security as revealing a secret or a private key. As a result, a number of vendors offer special hardware random-number generators based on various physical phenomena, such as thermal noise from a semiconductor diode, instability of a free-running oscillator, or radioactive decay. In purely software systems, such as the public-domain PGP or PEM, much simpler sources of randomness must be used. These include measuring the time between successive keystrokes, mixing unpredictable multi-user system characteristics, etc. Your task is to develop a program that evaluates different random number generators on the basis of standard criteria [2]-[4], and then to apply your program to test available software and hardware random number generators.
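One of the simplest standard criteria is the frequency (monobit) test; a minimal Python sketch (the p-value formula erfc(|S|/sqrt(2n)) is the standard one for this test):

    from math import erfc, sqrt

    def monobit_test(bits):
        """Frequency (monobit) test: are zeros and ones roughly balanced?
        Returns a p-value; very small values (e.g. < 0.01) indicate a suspect generator."""
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)
        return erfc(abs(s) / sqrt(2 * n))

    # quick check against Python's (non-cryptographic) generator
    import random
    sample = [random.getrandbits(1) for _ in range(100000)]
    print(monobit_test(sample))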
Literature:
Description:
Creating a pair of corresponding public and private keys for the RSA cryptosystem requires the generation of two very large prime numbers. The minimum length of these primes used in today's RSA implementations is 256 bits, which corresponds to numbers on the order of 10^77. Due to the fast progress in the factorization algorithms used to break RSA, it is currently recommended to use 384, 512, or even 1024-bit primes. It may therefore be surprising that generating such long primes takes only seconds on a SPARC or a PC. This is possible thanks to fast probabilistic tests for primality. A single iteration of a probabilistic test indicates either that the tested number is composite (for sure) or that it is probably prime (with some probability). Several iterations of this test can decrease the probability of the test declaring a composite number prime to 10^-20, 10^-40, or any other value chosen by the user. Since an error probability of 10^-20 is smaller than the odds of winning the top prize in a U.S. state lottery and being killed by lightning on the same day, people usually do not worry about it. The two prime numbers used in the RSA cryptosystem are often required to have special properties that make factorization of their product more difficult. A method for generating numbers with these properties, so-called strong primes, takes only about 20% more time than the generation of an arbitrary prime. Your task is to implement and optimize a procedure for generating strong primes using the probabilistic Miller-Rabin algorithm, and to examine the time taken by your program to generate primes of different sizes.
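A minimal Python sketch of the Miller-Rabin test and a straightforward random-prime generator built on it (the additional strong-prime conditions are not included here):

    import random

    def miller_rabin(n, iterations=40):
        """Probabilistic primality test; error probability at most 4^-iterations."""
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        d, s = n - 1, 0
        while d % 2 == 0:                     # write n - 1 = d * 2^s with d odd
            d //= 2
            s += 1
        for _ in range(iterations):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                  # a is a witness: n is definitely composite
        return True                           # probably prime

    def random_prime(bits):
        while True:
            candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # odd, full length
            if miller_rabin(candidate):
                return candidate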
Literature:
Description:
Testing whether a randomly selected number is prime is much easier than factoring (polynomial vs. subexponential complexity). Still, proving (with certainty 1.0) that a given large number is prime is not easy, and not efficient enough to be used for generating keys in cryptographic applications. As a result, two practical approaches have been developed for generating large random prime numbers. The first (see project KS-2) is to establish the primality of a given number with some small error probability acceptable to the user (e.g., 10^-20). The second approach is to generate a prime number together with a certificate of primality, which gives absolute certainty that the number is prime. Surprisingly, the most efficient algorithms from the two groups, the probabilistic Miller-Rabin algorithm and the deterministic Maurer algorithm, take approximately the same time to execute. The deterministic method developed by Maurer in 1989 is a recursive algorithm based on an almost forgotten theorem due to Pocklington, published as early as 1914. The popularity of Maurer's algorithm is growing, but a vast majority of cryptographic products still use probabilistic primality testing. Your task is to implement Maurer's method and examine the time taken by your program to generate primes of different sizes.
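At the heart of Maurer's recursion is a Pocklington-style criterion: if q is a prime divisor of n - 1 with q^2 > n, and some base a satisfies a^(n-1) = 1 (mod n) and gcd(a^((n-1)/q) - 1, n) = 1, then n is prime. A minimal sketch of checking such a certificate (the recursive construction of suitable candidates n = 2Rq + 1 is omitted):

    from math import gcd

    def pocklington_prime(n, q, a):
        """Certify that n is prime, given a prime q with q | n - 1 and q^2 > n,
        and a witness a.  Returns True only if the Pocklington conditions hold."""
        assert (n - 1) % q == 0 and q * q > n
        return pow(a, n - 1, n) == 1 and gcd(pow(a, (n - 1) // q, n) - 1, n) == 1

    # toy example: q = 1019 is prime, n = 2*q + 1 = 2039, and q^2 > n
    print(pocklington_prime(2039, 1019, a=2))     # -> True, so 2039 is provably prime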
Literature:
Description:
Last year, hundreds of U.S. companies started to develop their own proprietary Certification Authorities. Several announced plans to become the central Certification Authority for the U.S. or even the whole Internet. People started to ask: "Why does everyone want to be a Certification Authority? Is it because of patriotism, power, money, or maybe other reasons?" Certification Authorities (CAs) are necessary to make the commercial use of public-key cryptography safe, easy, and efficient. Their primary function is to register public keys generated by individuals, and to issue so-called certificates (also called digital IDs) that bind a public key to a given person. In their simplest form, certificates contain a public key and a name signed with the CA's private key. Certificates, as opposed to public keys used alone, can be safely stored in a public directory and sent over an insecure network. They allow everyone to securely communicate (and do business) with people they have never met before, without any prior arrangements. Creating a Certification Authority for any large organization or company requires many technical (as well as administrative) decisions. You must decide what software/hardware to use, what services/algorithms/standards should be supported, how keys are generated and stored, what the policy is regarding proofs of a user's identity, how long keys remain valid, whether they can be revoked, to which public-key hierarchy (or hierarchies) your Certification Authority should subscribe, etc. Your task is to develop a design for a Certification Authority for the whole UofR, your department, or a large company of your choice.
Literature:
Description:
Timestamping is a method of proving that a given electronic document existed at a specific time and that not a single bit of the document has been modified since then. It prevents backdating of electronic documents and any form of tampering with them. Timestamping supports the use of digital signatures, which only associate the author with the document and have no mechanism to prove the time when the document was created. Actually, digital signatures can hardly be used alone for any purpose. Surprisingly, the existing implementation of timestamping does not require access to the contents of the document; does not require any public-, private-, or secret-key transformations; and can be renewed and thus remain secure almost indefinitely. Timestamping, instead of relying on ciphers and the security of their keys, is based on keyless hash functions and the regular publication of a few lines in the Public and Commercial Notices section of the Sunday New York Times. It is very efficient, can be performed on-line, and can be applied to any kind of data (documents, mail, graphics, images, video, etc.). Your task is to develop your own implementation of a timestamping service for use within UofR, and to try to integrate it with a selected application, such as electronic mail or a word processor.
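A minimal sketch of the linking idea behind such a service (greatly simplified: each new certificate hashes the document digest together with the previous certificate, so no record can later be inserted or altered without breaking the chain; the class and field names are illustrative):

    import hashlib, time

    class TimestampLog:
        """Hash-linked log: each entry commits to the document digest, the time,
        and the previous entry, so past entries cannot be altered undetectably."""
        def __init__(self):
            self.entries = [b"\x00" * 32]        # genesis link

        def stamp(self, document: bytes):
            digest = hashlib.sha256(document).digest()
            t = str(int(time.time())).encode()
            link = hashlib.sha256(self.entries[-1] + t + digest).digest()
            self.entries.append(link)
            return len(self.entries) - 1, t, digest, link    # the certificate

        def verify(self, index, t, digest, link):
            expected = hashlib.sha256(self.entries[index - 1] + t + digest).digest()
            return expected == link == self.entries[index]

    log = TimestampLog()
    cert = log.stamp(b"my document")
    print(log.verify(*cert))                     # -> True
    # publishing the latest link (e.g., in a newspaper) anchors the whole chain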
Literature:
Description:
Imagine that you have a fantastic product to sell over the network. You want to create a cybermall and make your product available to all cyberusers. The only thing that scares you is the security of your site and the security of the payment transactions that will be conducted over the Internet. You know that there are basically two methods of dealing with payments over the network: the first involves the use of credit card numbers, transmitted securely over the network; the second assumes the use of digital cash - an electronic equivalent of paper bills and metal coins, permitting full anonymity of the buyer. You wonder which of these methods your customers would prefer. Which of them is more secure from your point of view, and easier to implement? You also know that there are several recently proposed standards for secure WWW protocols (SSL, S-HTTP), and that there exist products that already implement strong authentication and confidentiality. But are they safe? Are they easy to use? Or maybe it is better to write something yourself? Your task is to investigate existing solutions to the problem of a secure cybermall, choose the best one, and possibly improve it by writing your own piece of software.
Literature:
Description:
PGP (Pretty Good Privacy) is today's de facto standard for secure Internet mail. It is very well suited for individuals protecting their private correspondence, but not easily scalable to large enterprises and commercial use. PEM (Privacy Enhanced Mail) was the first attempt to develop a specification for secure mail for commercial purposes. It has not become very popular, mainly because of the lack of the public-key infrastructure (Certification Authorities) it demands. The other drawback of this standard is its overly restrictive formats and mechanisms (7-bit text messages only, no privacy without authentication, etc.). With the emergence of multimedia mail (MIME), a new standard for secure electronic mail is needed. Two such standards have been proposed: one, called MOSS (MIME Object Security Services), was published in October 1995 and developed mainly by Trusted Information Systems; the other, S/MIME (Secure MIME), has been developed by RSA Data Security, Inc. The main purpose of the standardization is to provide interoperability between e-mail packages coming from different vendors. As a result, only one of these proposals has a chance to succeed. Public-domain implementations of both standards have already been released, and various vendors have announced the release of commercial products embracing one of these standards in 1996. Your task is to analyze both proposals and their implementations, find the similarities and differences between them, and try to predict which of these standards will become more popular on the Internet.
Literature:
About S/MIME:
Description:
Firewalls are the hottest and best-selling security product in history. A single firewall can cost as much as $100,000, and the number of vendors is constantly growing. On the other hand, you can fairly easily construct a simple (and almost as secure) firewall for free, using public-domain software toolkits. The basic idea of a firewall is to separate your internal network from the global Internet. All traffic coming in from the Internet or going out from the internal network must pass through the firewall. The firewall decides whether this traffic should be permitted or blocked. By appropriately building or configuring the firewall, you can block traffic from the outside to the inside (e.g., limit it to e-mail only), while permitting users on the inside to communicate freely with the outside. Your task is to choose a part of the University network that (in your opinion) should be protected by a firewall, define what kinds of traffic should be permitted, and then implement your firewall using the public-domain software toolkit distributed by Trusted Information Systems.
Literature:
Description:
To perform administrative tasks on a UNIX system, a manager first needs to gain root privileges. Traditionally this is accomplished in one of two ways: the administrator either logs into the system as root (discouraged due to lack of accountability) or uses the 'su' program. Along with its cousins 'sudo' and 'osh', 'su' bestows root privileges when given the proper password: in the case of 'su', this is the root password; in the case of 'sudo' and 'osh', the administrator's own password is accepted if his or her userid appears in a pre-existing access control list.
Unfortunately, none of these methods scale very well or provide password security for remote administration over a network. The most serious problem is that all of these systems use reusable passwords vulnerable to eavesdropping. Another problem is convenience. It is generally considered poor practice to have the same root or user passwords on multiple machines, yet humans are not very good at remembering tens or hundreds of passwords, let alone one.
To address the problem of security, I propose an authentication system based on one-time passwords; for convenience, the password database should be centralized, and the authentication system distributed in client/server form. Authentication will be done by a server program managing a single password database, and root privilege will be granted by client programs querying that server over an insecure network. The project will consist of two parts. First, I must design a secure protocol for this scheme, immune to eavesdropping and network spoofing. Second, I will attempt to write an administrative tool implementing this protocol, as a replacement for 'su' and similar programs.
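One classical building block for such a protocol is a Lamport-style (S/Key) hash chain: the server stores only the n-th iterate of a hash of the secret, and each login reveals the next preimage, so a password captured by an eavesdropper is never valid again. A minimal sketch (not the full protocol, which must also handle spoofing and replay across the network):

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def make_chain(secret: bytes, n: int):
        """Client side: derive the one-time passwords h^1(s) ... h^n(s)."""
        chain = [secret]
        for _ in range(n):
            chain.append(h(chain[-1]))
        return chain          # chain[n] is given to the server as the initial verifier

    class OTPServer:
        def __init__(self, verifier: bytes):
            self.verifier = verifier          # currently expected h^k(secret)

        def login(self, password: bytes) -> bool:
            if h(password) == self.verifier:  # password must be the preimage of verifier
                self.verifier = password      # next login must reveal an earlier link
                return True
            return False

    secret = b"correct horse battery staple"
    chain = make_chain(secret, 100)
    server = OTPServer(chain[100])
    print(server.login(chain[99]))   # True  -- first one-time password
    print(server.login(chain[99]))   # False -- replaying an old password fails
    print(server.login(chain[98]))   # True  -- next password in the chain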
Literature:
Prerequisites: UNIX services and protocols
Description:
The best algorithms for factoring large integers have subexponential complexity (i.e., complexity that is greater than any polynomial complexity and smaller than exponential complexity). This complexity is not good enough to factor numbers above 200 decimal digits, even assuming that all the computers in the world were in use for hundreds of years. In 1994, Peter Shor of AT&T Bell Labs discovered a factorization algorithm that runs in polynomial time, in theory permitting the factoring of numbers of almost arbitrary size. The algorithm can be implemented, however, only on a special type of computing machine, referred to as a quantum computer. As opposed to a classical computer, where logic states are represented using bits, that is, as a binary "zero" or "one," in a quantum computer logic states are represented using qubits, that is, as a superposition of a classical "zero" and a classical "one". Every qubit may have infinitely many different values, and thus can carry much more information than a classical bit. A quantum computer is not a general-purpose machine; it has been shown to solve efficiently only a few problems, but among them are almost all of the computationally intensive problems that public-key cryptography is based on. Therefore, if ever developed, quantum computation would mean the demise of RSA and other public-key cryptosystems. So far, only simple quantum gates have been shown to work, based on various physical principles, and there are plenty of problems to overcome before a quantum machine that can factor even a two-digit number is built. It is unclear whether such a machine, if ever developed, can be scaled up to a computer consisting of the billions of gates required to factor numbers of the size used in today's implementations of RSA. Shor's algorithm has two phases: the first is based on quantum computing, the second on classical computations. The classical phase involves efficient algorithms known from number theory, such as continued fraction expansion and Euclid's algorithm. The entire algorithm is probabilistic, meaning that several repetitions of one or both phases, with different initial parameters, may be necessary to factor a number. Your task is to analyze the algorithm, determine the expected number of repetitions necessary to complete a factorization, and estimate the number of operations and quantum gates necessary to factor a number of a given size, taking into account the redundancy introduced by quantum error-correction codes.
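The classical phase can be prototyped on an ordinary computer; in the sketch below the order-finding step (the only part that actually requires the quantum computer) is replaced by brute force:

    from math import gcd
    import random

    def order(a, N):
        """Order of a modulo N, found by brute force; in Shor's algorithm this value
        is obtained from the quantum period-finding step plus continued fractions."""
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def shor_classical(N):
        """Classical post-processing for an odd composite N that is not a prime power."""
        while True:
            a = random.randrange(2, N)
            g = gcd(a, N)
            if g > 1:
                return g, N // g           # lucky: a already shares a factor with N
            r = order(a, N)
            if r % 2 == 1:
                continue                   # odd period: repeat with another a
            y = pow(a, r // 2, N)
            if y == N - 1:
                continue                   # a^(r/2) = -1 (mod N): another bad choice
            return gcd(y - 1, N), gcd(y + 1, N)

    print(shor_classical(15))   # e.g. (3, 5) or (5, 3)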
Literature:
Description:
Many public-key cryptosystems (such as the ElGamal encryption scheme, Diffie-Hellman key exchange, the Digital Signature Standard, etc.) are based on the difficulty of computing discrete logarithms in the multiplicative group of GF(P), the integers modulo a large prime P. The best classical algorithms for computing discrete logarithms have the same complexity as the best factorization algorithms. This means that the size of P used in implementations of these cryptoalgorithms must be as large as the size of the modulus N in RSA (above 200 decimal digits). Peter Shor, who discovered a polynomial-time algorithm for the factorization of large integers on a quantum computer (see project PA-1), has shown that quantum computations can be used to compute discrete logarithms as well. The algorithm, in the general case, is more sophisticated than the algorithm for factorization, but has the same polynomial complexity. Your task is to understand the algorithm and develop a graphical representation of its particular phases (similar to the representation used for the factorization algorithm). Additionally, you are asked to determine the expected number of repetitions of the algorithm necessary to complete the computations, and to estimate the number of operations and quantum gates necessary to implement the algorithm for a given size of the modulus P, taking into account the redundancy introduced by quantum error-correction codes. Another open problem is the applicability of Shor's algorithm to breaking elliptic curve cryptosystems.
Literature:
Description:
Quantum cryptography permits the exchange of a secret key between two people who have never met before. Its security is considered almost absolute, as it is based on fundamental laws of quantum physics (in particular, the Heisenberg uncertainty principle), as opposed to the unproven mathematical assumptions, such as the difficulty of factoring large integers, that public-key cryptography relies on. On top of this, quantum cryptography permits the communicating parties to discover any attempt by an adversary to passively eavesdrop on the channel, a property considered impossible to achieve by any classical means. The idea of quantum cryptography was first invented by Stephen Wiesner in the late 1960s, but his paper, written at that time, was rejected by reviewers and waited more than ten years for publication. Rediscovered at the beginning of the 1980s, quantum cryptography was still considered science fiction - a smart but unrealistic idea of a couple of crazy physicists. Surprisingly, the first apparatus implementing quantum cryptography was built as early as 1989. Although the first version was still primitive, permitting communication only through open air over a distance of about one foot, subsequent versions developed by various groups in the U.S., Canada, and Europe allowed communication through a standard commercial optical cable over distances in excess of 20 km. Currently, it seems that the main barriers to acceptance of quantum cryptography are the cost of quantum communication equipment and the limited distance between the communicating parties. The security of quantum key exchange in current implementations cannot be considered absolute either, because of the discrepancy between the theoretical model of a protocol and its actual implementation (e.g., the use of faint light pulses instead of single photons). Your task is to investigate different schemes for quantum key exchange, and assess their security against various eavesdropping strategies.
Literature:
Description:
For years, strong cryptography was treated as munitions, and its export from the U.S. was completely banned. Philip Zimmermann, the author of PGP - the most widespread program for the protection of electronic mail on the Internet - was accused of violating export restrictions by making the source code of his program available by ftp. He was threatened with several years in jail and a huge fine before the investigation was finally dropped under the pressure of public opinion. Similar charges threaten everybody who takes abroad the diskette accompanying Bruce Schneier's book, even though the book itself, containing listings of the programs from this diskette, can be freely published in other countries. According to a survey organized by Trusted Information Systems, there exist approximately as many foreign as domestic cryptographic products and companies. The majority of foreign products employ algorithms developed in the U.S. In many cases, foreign products are of the same quality as those developed in the U.S., and have the clear advantage of being allowed to be sold all over the world (including the U.S.). American products must be developed in two versions: one for domestic use, with strong cryptography, and the other an "international" version, with weakened cryptoalgorithms (key size limited to 40 bits). This situation has caused growing resistance from American industry, and has brought several small changes in the government's position during the past few years. Still, the controversy seems far from being resolved. A new hot dispute emerged after the government's attempt to introduce the Clipper chip, a new American standard developed by the NSA that provided law-enforcement agencies with the ability to conduct electronic surveillance of encrypted data. The standard was rejected by the commercial sector, but its idea initiated the development of alternative key-recovery schemes. Currently, the issues of export control and key recovery seem to be inadvertently entangled, as according to the new government position, cryptographic products that use strong cryptography are exportable under a general licensing program only if they contain a key-recovery mechanism.
Literature:
Prerequisites: basics of business law and international finance
Description:
A digital signature can perform the same functions as a handwritten signature, permitting financial transactions over the network between people who have never met before. Nevertheless, its use is not binding under today's law. Utah was the first U.S. state to introduce a law regarding the use of digital signatures for electronic commerce, in 1995. The same year, a similar law was introduced in California. Currently, several other states are preparing legislation regarding digital signatures. This legislation does not only equate the legal status of handwritten and digital signatures. It also provides the legal framework for the operation of Certification Authorities (see project KA-1), which register the public keys used to verify signatures and issue public-key certificates (also called digital IDs) that bind a public key to a given person. Your task is to analyze the current state of legislation regarding digital signatures and the liability of Certification Authorities.
Literature:
Description:
Elliptic curve cryptosystems are a new, emerging class of public-key cryptosystems that may in the future successfully compete with the current monopoly of the RSA cryptosystem. The security of most elliptic curve cryptosystems is based upon the difficulty of solving the discrete logarithm problem in the group of points on the elliptic curve. If the elliptic curve is chosen correctly, the best known algorithm for finding the discrete logarithm is of exponential difficulty. The security of RSA is based on the difficulty of factoring large integers, a problem for which quite efficient subexponential algorithms (such as the Number Field Sieve) have been developed (see project CS-3). As a result, the cryptographic keys may be significantly shorter for elliptic curve cryptosystems than for RSA, which results in faster and more compact implementations. The other advantage of using elliptic curves for constructing cryptosystems is that each user may (but need not) choose his or her own curve. In that case, reconstructing the private key of one user does not provide any information that could be used to reconstruct the private key of another user. According to current knowledge, choosing an appropriate elliptic curve is equivalent to calculating the number of points on the curve and checking whether this number has a large prime divisor. Only in this case does the discrete logarithm problem in the group of points on the curve have exponential complexity. Several methods have been devised for choosing an appropriate elliptic curve. The annex to the IEEE P1363 standard lists three of them: counting the number of points on a randomly chosen curve using Schoof's algorithm, constructing the curve using the Weil theorem, and constructing the curve using the method of complex multiplication. Your task is either to perform a comparative analysis of the last two methods, or to prepare a detailed specification of Schoof's algorithm and try to estimate its running time.
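For very small curves the point count can be obtained by brute force, which is a useful sanity check for an implementation of Schoof's algorithm; a minimal sketch (the curve parameters in the example are arbitrary):

    def curve_order(a, b, p):
        """Number of points on y^2 = x^3 + a*x + b over GF(p), counted naively:
        #E = p + 1 + sum over x of the Legendre symbol of (x^3 + a*x + b).
        Feasible only for tiny p; Schoof's algorithm does this in polynomial time."""
        def legendre(v):
            if v % p == 0:
                return 0
            return 1 if pow(v, (p - 1) // 2, p) == 1 else -1
        return p + 1 + sum(legendre(x * x * x + a * x + b) for x in range(p))

    def factorize(n):
        """Trial-division factorization (fine for the tiny orders used here)."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    n = curve_order(2, 3, 10007)
    print(n, factorize(n))   # a good curve has an order with a large prime factor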
Literature:
Description:
Elliptic curves for cryptographic applications may be constructed over the fields GF(p), where p is a large prime, and GF(2^m), where m is an integer. Additionally, the operations in the field GF(2^m) - addition, multiplication, division, and remainder - can be defined in various ways, leading to equivalent (isomorphic) fields. The choice of these operations affects the efficiency and complexity of hardware and software implementations of elliptic curve cryptosystems. The two ways of defining GF(2^m) given in the IEEE P1363 standard use so-called polynomial and optimal normal bases. Either choice of basis leads to a hardware implementation that is more efficient than a hardware implementation of the operations in GF(p). Your task is to analyze and compare the architectures used to implement multiplication and inversion in the fields GF(2^m) with polynomial and optimal normal bases, and to estimate the running times of these operations.
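A minimal Python sketch of polynomial-basis arithmetic, with field elements packed into integers and multiplication reduced modulo an irreducible polynomial (the GF(2^8) example uses the well-known AES polynomial; a cryptographic implementation would use a much larger m):

    def gf2m_mul(a, b, m, poly):
        """Multiply field elements (bit vectors packed into integers) in GF(2^m),
        polynomial basis, reducing modulo the irreducible polynomial `poly`."""
        r = 0
        while b:
            if b & 1:
                r ^= a                      # addition in GF(2^m) is XOR
            b >>= 1
            a <<= 1
            if (a >> m) & 1:                # degree reached m: reduce
                a ^= poly
        return r

    def gf2m_inv(a, m, poly):
        """Inverse via Fermat: a^(2^m - 2) in GF(2^m), by square-and-multiply."""
        result, exp = 1, (1 << m) - 2
        while exp:
            if exp & 1:
                result = gf2m_mul(result, a, m, poly)
            a = gf2m_mul(a, a, m, poly)
            exp >>= 1
        return result

    # toy field GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1
    POLY = 0x11B
    x = 0x53
    assert gf2m_mul(x, gf2m_inv(x, 8, POLY), 8, POLY) == 1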
Literature: