Encryption algorithm parameters. Types of encryption algorithms

Basic modern encryption methods

Among the various encryption methods, the following main methods can be distinguished:

  • Replacement (substitution) algorithms - characters of the source text are replaced with characters of another (or the same) alphabet in accordance with a predetermined scheme, which is the key of the cipher. On its own, this method is practically never used in modern cryptosystems because of its extremely low cryptographic strength.
  • Permutation (transposition) algorithms - characters of the original text are swapped according to a certain rule, which is the secret key. The permutation algorithm by itself has low cryptographic strength, but it is included as an element in many modern cryptosystems.
  • Gamma algorithms - the characters of the source text are added to the characters of a certain random sequence (the gamma, or keystream).
  • Algorithms based on complex mathematical transformations of the source text according to a certain formula. Many of them rely on unsolved mathematical problems. For example, the RSA encryption algorithm, widely used on the Internet, is based on the properties of prime numbers.
  • Combined methods - sequential encryption of the source text using two or more of the above methods.

Let's take a closer look at algorithms built on complex mathematical transformations and at combined methods, which are the ones most commonly used to protect data in modern information systems.

Algorithms based on complex mathematical transformations

RSA algorithm

The RSA algorithm (named after the initials of its creators Rivest, Shamir and Adleman) is based on the properties of prime numbers (and very large ones at that). Prime numbers are numbers that have no divisors other than themselves and one. Coprime numbers are numbers that have no common divisors other than 1.

First, you need to choose two very large primes p and q (large primes are needed to construct large, strong keys; for example, the Unix program ssh-keygen generates 1024-bit keys by default). Multiplying p and q gives the parameter n. Then a random number e is selected that must be relatively prime to φ(n) = (p - 1)*(q - 1). A number d is found for which the following relation holds

(e*d) mod φ(n) = 1.

Here mod denotes the remainder of division, i.e. if e multiplied by d is divided by φ(n), the remainder must be 1. In other words, the number (e*d - 1) must be divisible by φ(n).

The public key is the pair of numbers e and n, and the private key is the pair d and n. When encrypting, the source text is treated as a sequence of numbers, and on each number M(i), which must be less than n, the following operation is performed

C(i) = M(i)^e mod n. (1)

The result is the sequence C(i), which constitutes the cryptotext. Decryption is performed according to the formula

M(i) = C(i)^d mod n. (2)

As you can see, decryption requires knowledge of the secret key.

Let's look at an example using small numbers. Let p = 3, q = 7. Then n = p*q = 21 and φ(n) = (3 - 1)*(7 - 1) = 12. Let us choose e = 5. From the relation (5*d) mod 12 = 1 we find d = 17. Therefore, the public key is (5, 21) and the secret key is (17, 21).

Let's encrypt the sequence "2345":

C1 = 2^5 mod 21 = 11;

C2 = 3^5 mod 21 = 12;

C3 = 4^5 mod 21 = 16;

C4 = 5^5 mod 21 = 17.

Cryptotext - 11 12 16 17. Let's check the decryption:

M1 = 11^17 mod 21 = 2;

M2 = 12^17 mod 21 = 3;

M3 = 16^17 mod 21 = 4;

M4 = 17^17 mod 21 = 5.

As you can see, the result coincided with the original plaintext.
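
For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces this toy example with the built-in pow function (the key values are the ones from the example above; real RSA keys are, of course, vastly larger):

```python
# Toy RSA example with the tiny keys from the text (p = 3, q = 7).
# Real keys use primes that are hundreds of digits long.
n = 21          # modulus n = p * q
e = 5           # public exponent
d = 17          # secret exponent, since (5 * 17) mod 12 == 1

plaintext = [2, 3, 4, 5]

ciphertext = [pow(m, e, n) for m in plaintext]   # C(i) = M(i)^e mod n
recovered  = [pow(c, d, n) for c in ciphertext]  # M(i) = C(i)^d mod n

print(ciphertext)  # [11, 12, 16, 17]
print(recovered)   # [2, 3, 4, 5]
```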

The RSA cryptosystem is widely used on the Internet. When a user connects to a secure server over the Secure Sockets Layer (SSL) protocol - a protocol that provides secure data transmission over a network by combining a public-key cryptosystem with block encryption of the data - installs a WebMoney certificate on a PC, or connects to a remote server using OpenSSH or Secure Shell, most people do not even suspect that all these programs use public-key encryption based on the ideas of the RSA algorithm.

Is this system really that reliable?

Since its creation, RSA has constantly been subjected to brute-force attacks - attacks carried out by simply trying all possible or all most frequently occurring keys (passwords); in the latter case such an attack is usually called a "dictionary attack". In 1978 the authors of the algorithm published an article in which they presented a string encrypted with the method they had just invented. The first person to decipher the message was promised a reward of $100, but doing so required factoring a 129-digit number into two factors. This was the first competition to crack RSA. The problem was solved only 17 years after the publication of the article.

RSA's cryptographic strength is based on the assumption that it is extremely difficult, if not impossible, to determine the private key from the public key. To do so one would have to solve the problem of factoring a huge integer. So far no one has solved this problem by analytical methods, and the RSA algorithm can only be attacked by brute force. Strictly speaking, the assertion that the factoring problem is hard and that breaking RSA is hard has not been proven either.

The RSA company (http://www.rsa.ru) regularly holds competitions for cracking its own ciphers (and not only its own). Previous competitions were won by Distributed.net (http://www.distributed.net), an Internet community of volunteers.

Distributed.net members download a small client program to their PC, which connects to a central server and receives a piece of data for computation. All results are then uploaded to the central server, and the client receives the next block of initial data; this continues until all combinations have been tried. Users participating in the system are united into teams, and the site maintains ratings of both teams and countries. For example, in the competition to crack RC5-64 (an RSA block cipher with a 64-bit key), Distributed.net succeeded after five years (1,757 days). During this time 327,856 users took part in the project and more than 15.268×10^18 key variants were tried. It turned out that the encrypted phrase (not without humor) was "some things are better left unread". The general recommendation regarding the RC5-64 cipher is as follows: the algorithm is strong enough for everyday needs, but it is not recommended to use it to encrypt data that must remain secret for more than five years.

Probabilistic encryption

One type of public-key cryptosystem is probabilistic encryption, developed by Shafi Goldwasser and Silvio Micali. Its essence is to make the encryption algorithm E probabilistic. What are the advantages of this approach? For example, in the RSA system the values 0 and 1 are not "masked". This problem is successfully solved by probabilistic algorithms, since they map the plaintext M not to a single cryptotext C but to some element of a set of cryptotexts C_M, with each element of this set selected with a certain probability. In other words, for any plaintext M the result of the algorithm E is a random variable. It may seem that this makes decryption impossible, but that is not the case: for decryption to be possible, it is necessary that for different plaintexts M1 and M2 the sets C_M1 and C_M2 do not intersect. It should also be said that probabilistic encryption algorithms are more reliable than deterministic ones. The most common schemes in this area are probabilistic encryption based on RSA functions and the ElGamal cryptosystem.

Combined encryption methods

One of the most important requirements for an encryption system is high cryptographic strength. However, increasing it for any single encryption method leads, as a rule, to a significant complication of the encryption process and increased resource costs (time, hardware, reduced throughput, etc.), and consequently to longer operating times of cryptographic systems.

A fairly effective means of increasing the strength of encryption is the combined use of several different encryption methods, i.e. sequential encryption of the original text using two or more methods.

As studies have shown, the strength of combined encryption is not lower than the product of the strengths of the methods used.

Strictly speaking, any encryption methods can be combined, and in any number, but in practice the following combinations are the most widespread:

substitution + gamma;

permutation + gamma;

gamma + gamma;

substitution + permutation.

A typical example of a combined cipher is the US national data encryption standard DES.

DES cryptographic standard

In 1973, the US National Bureau of Standards began a program to create a standard for computer data encryption. A competition was announced among developers; it was won by IBM, which in 1974 presented the encryption algorithm known as DES (Data Encryption Standard).

In this algorithm, input 64-bit vectors, called plaintext blocks, are converted into output 64-bit vectors, called ciphertext blocks, using a binary 56-bit key K. The number of distinct keys of the DES algorithm is 2^56.

The algorithm is implemented as 16 similar encryption rounds, where the i-th round uses the round key Ki, an algorithmically generated selection of 48 of the 56 bits of the key K, i = 1, 2, ..., 16.

The algorithm provides high security, but published results have shown that with current technology a computing device costing about $1 million can be built that recovers a secret key by brute force in an average of 3.5 hours.

Because of the small key size, it was decided to use the DES algorithm only for protecting commercial information. Under these conditions a practical exhaustive search of all keys is not economically feasible, since the cost of the search does not correspond to the value of the information hidden by the cipher.

The DES algorithm was the first example of large-scale production and deployment of technical means in the field of information security. The US National Bureau of Standards tests hardware implementations of the DES algorithm proposed by developers on a special test bench. Only after positive test results does the manufacturer receive a certificate from the National Bureau of Standards for the right to sell its product. To date, several dozen products built on various element bases have been certified.

A high encryption speed has been achieved: 45 Mbit/s in the best products. Some hardware products cost less than $100.

The main areas of application of the DES algorithm:

storing data on computers (encrypting files and passwords);

message authentication (given a message and a check value, it is easy to verify the authenticity of the message);

electronic payment systems (for transactions with a wide clientele and between banks);

electronic exchange of commercial information (data exchange between buyer, seller and banker is protected from modification and interception).

Later a modification of DES appeared - Triple DES ("triple DES", because it encrypts the information three times with the "ordinary" DES algorithm) - free from the main drawback of the previous version, the short key: here it is twice as long. But, as it turned out, Triple DES inherited other weaknesses of its predecessor: the lack of support for parallel computation in encryption, and low speed.

GOST 28147-89

In 1989 a block cipher was developed in the USSR for use as the state data encryption standard. The development was accepted and registered as GOST 28147-89; the algorithm was introduced in 1990. And although the scope of application of this encryption algorithm is still being clarified, its implementation has already begun, in particular in the banking system. The algorithm is somewhat slow, but has very high cryptographic strength.

In general outline, GOST 28147-89 is similar to DES. Its block diagram differs from that of the DES algorithm only in the absence of the initial permutation and in the number of encryption rounds (32 in GOST versus 16 in DES).

The key of the GOST algorithm is an array of eight 32-bit vectors X1, X2, ..., X8. The round key of the i-th round, Ki, is equal to Xs, where the series of values of i from 1 to 32 corresponds to the following series of values of s:

1,2,3,4,5,6,7,8,1,2,3,4,5,6,7,8,1,2,3,4,5,6,7,8,8,7,6,5,4,3,2,1.

The GOST cipher uses a 256-bit key, so the key space contains 2^256 keys. No general-purpose computer system currently existing, or expected in the near future, could find a key in less than many hundreds of years. The Russian standard was designed with a large margin: its strength is many orders of magnitude greater than that of the American DES standard with its actual key size of 56 bits and key space of only 2^56, which is clearly not enough. The key of the GOST cryptographic algorithm is 32 bytes (256 bits) long, four times longer than the DES key, but the time required to try all keys increases not by a factor of four but by a factor of 256^(32-8) = 256^24, which translates into astronomical figures. In this respect, DES is now of research or academic rather than practical interest.

Conclusions on the use of modern encryption algorithms

Currently, the three main encryption standards most commonly used are:

  • DES;
  • GOST 28147-89 - a domestic method characterized by high cryptographic strength;
  • RSA - a system in which encryption and decryption are carried out using different keys.

The disadvantage of RSA is its rather low encryption speed, but it makes possible a personal electronic signature based on a secret key unique to each user. The characteristics of the most popular encryption methods are given in Table 1.

Table 1 Characteristics of the most common encryption methods

Because the main function of our software is data encryption, we are often asked questions about various aspects of cryptography. We have collected the most frequently asked questions in one document and have tried to give answers that are as detailed as possible, yet not overloaded with unnecessary information.

1. What is cryptography?

Cryptography is a theoretical scientific discipline, a branch of mathematics that studies the transformation of information in order to protect it from intelligent actions of the enemy.

2. What is an encryption algorithm?

An encryption algorithm is a set of logical rules that determine the process of converting information from an open state to an encrypted one (encryption) and, conversely, from an encrypted state to an open one (decryption).

Encryption algorithms appear as a result of theoretical research, both by individual scientists and scientific teams.

3. How does encryption protect data?

The basic principle of protecting data with encryption is that, to an outsider, encrypted data looks like "information garbage" - a meaningless set of characters. Thus, if the information falls into the hands of an attacker in encrypted form, he simply will not be able to use it.

4. Which encryption algorithm is the strongest?

In principle, any encryption algorithm proposed by any known expert in the field of cryptography is considered strong until proven otherwise.

As a rule, all newly emerging encryption algorithms are published for public consumption and are comprehensively studied in specialized cryptographic research centers. The results of such studies are also published for public consumption.

5. What is an encryption key?

An encryption key is a random, pseudo-random or specially formed sequence of bits, which is a variable parameter of the encryption algorithm.

In other words, if you encrypt the same information with the same algorithm, but with different keys, the results will also be different.

An encryption key has one essential characteristic - its length, which is usually measured in bits.

6. What kinds of encryption algorithms are there?

Encryption algorithms are divided into two large classes - symmetric and asymmetric (or non-symmetric).

Symmetric encryption algorithms use the same key to encrypt information and to decrypt it. In this case, the encryption key must be secret.

Symmetric encryption algorithms are usually easy to implement and do not require a lot of computing resources to operate. However, the inconvenience of such algorithms appears in cases where, for example, two users need to exchange keys. In this case, users either need to directly meet each other, or have some kind of reliable, interception-protected channel for sending the key, which is not always possible.

Examples of symmetric encryption algorithms are DES, RC4, RC5, AES, CAST.

Asymmetric encryption algorithms use two keys—one for encryption and one for decryption. In this case, we talk about a key pair. One key of the pair can be public (accessible to everyone), the other can be secret.

Asymmetric encryption algorithms are more complex to implement and more demanding on computing resources than symmetric ones, however, the problem of exchanging keys between two users is easier to solve.

Each user can create their own key pair and send the public key to their correspondent. This key can only encrypt data; decryption requires the secret key, which is kept only by its owner. Thus, obtaining the public key gives an attacker nothing, since he cannot use it to decrypt the encrypted data.

Examples of asymmetric encryption algorithms are RSA and ElGamal.

7. How are encryption algorithms cracked?

In cryptographic science there is a subsection - cryptanalysis, which studies the issues of breaking encryption algorithms, that is, obtaining open information from encrypted information without an encryption key.

There are many different cryptanalysis methods and techniques, most of which are too complex and lengthy to reproduce here.

The only method worth mentioning here is direct enumeration of all possible encryption key values (also called the "brute force" method). The essence of this method is to try all possible key values until the correct one is found.

8. What should be the length of the encryption key?

Today, a key length of 128 bits (16 bytes) is considered sufficient for symmetric encryption algorithms. To completely enumerate all possible 128-bit keys (a brute-force attack) within one year, you would need 4.2×10^22 processors, each performing 256 million encryption operations per second. The cost of this number of processors would be 3.5×10^24 US dollars (according to Bruce Schneier, Applied Cryptography).
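
As a rough sanity check of these figures, here is a short Python calculation (assuming roughly 3.15×10^7 seconds per year and the processor speed quoted above):

```python
# Rough check of the brute-force estimate for a 128-bit key space.
keys = 2 ** 128                      # number of possible 128-bit keys
ops_per_sec = 256_000_000            # encryptions per second per processor (figure from the text)
seconds_per_year = 3.15e7            # approximate number of seconds in a year

processors_needed = keys / (ops_per_sec * seconds_per_year)
print(f"{processors_needed:.2e}")    # ~4.2e+22, matching the quoted figure
```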

There is an international project, distributed.net, whose purpose is to unite Internet users into a virtual distributed supercomputer that searches encryption keys by brute force. The latest project, cracking a 64-bit key, was completed in 1,757 days; more than three hundred thousand users took part, and the combined computing power of all the project's computers was equivalent to almost 50,000 AMD Athlon XP processors clocked at 2 GHz.

It should be borne in mind that increasing the length of the encryption key by one bit doubles the number of possible keys and, consequently, the search time. That is, based on the figures above, in 1,757 × 2 days one could crack not a 128-bit key, as might seem at first glance, but only a 65-bit one.

9. I have heard about encryption keys of 1024 and even 2048 bits, but you say that 128 bits is enough. What does it mean?

That's right: encryption keys of 512, 1024 and 2048 bits, and sometimes longer, are used in asymmetric encryption algorithms. They rely on principles completely different from those of symmetric algorithms, so the key-length scales are also different.

10. Can intelligence services read data encrypted with modern algorithms?

The answer to this question is the most closely guarded secret of the intelligence services of any state. From a theoretical point of view it is impossible to read data encrypted with a known algorithm and a key of sufficient length (see the previous questions); however, who knows what is hidden behind the veil of state secrets? It may well turn out that the government possesses some alien technology capable of breaking any code :)

The only thing that can be said with confidence is that no state and no intelligence service will reveal this secret, and even if it is somehow able to decipher the data, it will never show this in any way.

To illustrate this, a historical example can be given. During the Second World War, the British Prime Minister Winston Churchill learned, as a result of the interception and decryption of German messages, about the upcoming bombing of the city of Coventry. Despite this, he took no measures to defend the city, so that the enemy would not learn that British intelligence could decipher its messages. As a result, on the night of 14-15 November 1940, Coventry was destroyed by German aircraft and a large number of civilians were killed. Thus, for Churchill, the cost of disclosing the fact that he could decipher German messages was higher than the cost of several thousand human lives.

Obviously, for modern politicians the price of such information is even higher, so we will learn nothing about the capabilities of modern intelligence services, either directly or indirectly. So even if the answer to this question is yes, this capability will most likely never show itself.

Source: SecurIT


Typically, new encryption algorithms are published for public consumption and studied in specialized research centers. The results of such studies are also published for public consumption.

Symmetric Algorithms
Encryption algorithms are divided into two large classes: symmetric (AES, GOST, Blowfish, CAST, DES) and asymmetric (RSA, ElGamal). Symmetric encryption algorithms use the same key to encrypt and decrypt information, while asymmetric algorithms use two keys - one for encryption and one for decryption.

If encrypted information needs to be transferred to another location, the decryption key must be transferred as well. The weak point here is the data transmission channel: if it is not secure or is being tapped, the decryption key may fall into the hands of an attacker. Systems based on asymmetric algorithms do not have this drawback, since each participant in such a system has a pair of keys: a public key and a secret key.

Encryption key
This is a random or specially derived (for example, from a password) sequence of bits that serves as a variable parameter of the encryption algorithm.
If you encrypt the same data with the same algorithm, but with different keys, the results will also be different.

Typically, in encryption programs (WinRAR, Rohos, etc.), the key is created from a password specified by the user.

The encryption key comes in different lengths, which are usually measured in bits. As the key length increases, the theoretical strength of the cipher increases. In practice this is not always true.

In cryptography, the encryption mechanism is considered an unclassified quantity: an attacker may have the full source code of the encryption algorithm as well as the ciphertext (Kerckhoffs's principle). Another assumption sometimes made is that the attacker may know part of the unencrypted (plain) text.

Strength of the encryption algorithm.
An encryption algorithm is considered strong until proven otherwise. Thus, if an encryption algorithm has been published, has existed for more than five years, and no serious vulnerabilities have been found in it, we can assume that its strength is suitable for protecting classified information.

Theoretical and practical strength.
In 1949 C. E. Shannon published the article "Communication Theory of Secrecy Systems". Shannon considered the strength of cryptographic systems from two points of view: practical and theoretical. His conclusion regarding theoretical strength is still pessimistic: the key must be as long as the plaintext.
Therefore Shannon also considered the question of the practical strength of cryptographic systems: is a system reliable if the attacker has limited time and computing resources for analysing the intercepted messages?

Vulnerabilities are usually found in the programs that encrypt data with some algorithm: programmers make an error in the program logic or in the cryptographic protocol, and by studying how the program works (at a low level) an attacker can ultimately gain access to the secret information.

Hacking the encryption algorithm
A cryptosystem is considered compromised if an attacker can compute the secret key, or can construct a transformation algorithm equivalent to the original cryptoalgorithm that is executable in real time.

Cryptology has a subsection - cryptanalysis - which studies the breaking or forging of encrypted messages. There are many methods and techniques of cryptanalysis. The most popular is direct enumeration of all possible encryption key values (the "brute force" method): all possible key values are tried until the correct one is found.

In practice, this means that the attacker must:

  • Have at his disposal the cryptosystem (i.e. the program) and examples of encrypted messages.
  • Understand the cryptographic protocol, in other words, how the program encrypts data.
  • Develop and implement a key-search algorithm for this cryptosystem.

How can one determine whether a candidate key is correct?
It all depends on the specific program and the implementation of the encryption protocol. Usually, if decryption produces "garbage", the key is wrong; if the text is more or less meaningful (and this can be checked), the key is correct.
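
As a toy illustration (assuming, purely for this sketch, a single-byte XOR "cipher" and a plausibility test that simply looks for printable ASCII), a brute-force search might look like this:

```python
# Toy brute force: the "cipher" is XOR with a single unknown byte.
# Real ciphers have keys far too long for this approach to work.
def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

def looks_meaningful(candidate: bytes) -> bool:
    # Crude plausibility test: every character is printable ASCII.
    return all(32 <= b < 127 for b in candidate)

secret_key = 0x5A
ciphertext = xor_cipher(b"attack at dawn", secret_key)

for key in range(256):                    # try every possible key value
    plaintext = xor_cipher(ciphertext, key)
    if looks_meaningful(plaintext):
        print(key, plaintext)             # candidate keys; 0x5A is among them
```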

Encryption algorithms
AES (Rijndael). Currently the US federal encryption standard.


Approved as a standard by the Department of Commerce on December 4, 2001; the decision came into force upon publication in the Federal Register (12/06/01). Only the cipher version with a 128-bit block size has been adopted as the standard.

GOST 28147-89. The encryption and data-protection standard of the Russian Federation. Initially the algorithm carried a secrecy classification ("of special importance" or "top secret" - it is not known exactly which); the classification was then successively lowered, and by the time the algorithm was officially taken through the USSR State Committee for Standards in 1989 it had been reduced to "for official use only" (which, formally, is not a secrecy classification). In 1989 it became the official standard of the USSR and later, after the collapse of the USSR, a federal standard of the Russian Federation.

Blowfish. Its complex key-setup scheme significantly complicates a brute-force attack on the algorithm, but makes it unsuitable for systems where the key changes frequently and only small amounts of data are encrypted under each key.

The algorithm is best suited for systems in which large amounts of data are encrypted using the same key.

DES. The US federal encryption standard from 1977 to 2001. Adopted as a US federal standard in 1977; in December 2001 it lost this status due to the introduction of a new standard.

CAST. In a sense, an analogue of DES.

www.codenet.ru/progr/alg/enc
Encryption algorithms, Review, information, comparison.

http://www.enlight.ru/crypto
Materials on asymmetric encryption, digital signatures and other “modern” cryptographic systems.

Alexander Velikanov,
Olga Cheban,
Tesline-Service SRL.

A former Abu Dhabi banker, Mohammad Ghaith bin Mahah Al Mazroui, has developed a code that he claims is impossible to crack. The cipher, called the Abu Dhabi Code, is based on a set of symbols invented by Al Mazroui himself: each letter is replaced by a specially invented symbol, and these symbols do not belong to any known language in the world.


It took the developer a year and a half to work on the cipher, which Al Mazroui calls “completely new.”

According to the enthusiast, anyone can create their own code, and the complexity of a cipher is determined by the length of its key. It is believed that, in principle, given the desire, certain skills and the appropriate software, almost any cipher, even the most complex, can be cracked.

However, Al Mazroui insists that his creation is unbreakable and is the most reliable cipher today. "It is almost impossible to decipher a document encoded with the Abu Dhabi Code," he is sure.

To prove his point, the banker has challenged all prominent cryptographers, hackers and codebreakers, urging them to try to crack his code.

3. Kryptos is a sculpture that American sculptor James Sanborn installed on the grounds of the CIA headquarters in Langley, Virginia, in 1990. The encrypted message written on it still cannot be deciphered.

4. Code printed on a Chinese gold bar. Seven gold bars were allegedly issued to a General Wang in Shanghai in 1933. They bear pictures, Chinese writing and some encrypted messages, including ones in Latin letters. They may contain certificates of the metal's authenticity issued by a US bank.


5. Beale cryptograms - three encrypted messages believed to contain information about the location of a treasure of two wagonloads of gold, silver and precious stones, buried in the 1820s near Lynchburg, in Bedford County, Virginia, by a party of gold prospectors led by Thomas Jefferson Beale. The treasure, which has never been found, would be worth about 30 million dollars in today's money. The mystery of the cryptograms has not been solved; in particular, whether the treasure really exists remains disputed. One of the messages has been deciphered: it describes the treasure itself and gives general indications of its location. The remaining undeciphered letters may contain the exact location of the cache and a list of the treasure's owners.

6. The Voynich manuscript, often called the most mysterious book in the world. The manuscript uses a unique alphabet, has about 250 pages and includes drawings of unknown flowers, naked nymphs and astrological symbols. It first appeared at the end of the 16th century, when Holy Roman Emperor Rudolf II bought it in Prague from an unknown merchant for 600 ducats (about 3.5 kg of gold, today more than 50 thousand dollars). From Rudolf II the book passed to nobles and scientists, and at the end of the 17th century it disappeared. The manuscript reappeared around 1912, when it was purchased by the American bookseller Wilfrid Voynich. After his death the manuscript was donated to Yale University. The British scientist Gordon Rugg believes the book is a clever hoax. The text contains features that are not characteristic of any language; on the other hand, some features, such as the lengths of words and the way letters and syllables are connected, are similar to those of real languages. "Many people think it's too complicated to be a hoax; it would take some mad alchemist years to build," says Rugg. However, Rugg shows that such complexity could easily be achieved using an encryption device invented around 1550 known as the Cardan grille. In this symbol table, words are created by moving a card with holes cut in it; the gaps left in the table produce words of different lengths. By superimposing such grilles on a table of the manuscript's syllables, Rugg created a language that shares many, if not all, of the features of the manuscript's language. According to him, it would take three months to create the entire book.

7. Dorabella Cipher, composed in 1897 by British composer Sir Edward William Elgar. He sent a letter in encrypted form to the city of Wolverhampton to his friend Dora Penny, the 22-year-old daughter of Alfred Penny, rector of St. Peter's Cathedral. This code remains unsolved.

8. Until recently the list also included the Chaocipher, which could not be broken during the lifetime of its creator. The cipher was invented by John F. Byrne in 1918, and for almost 40 years he unsuccessfully tried to interest the US authorities in it. The inventor offered a cash reward to anyone who could break his cipher, but no one ever claimed it.

But in May 2010, Byrne's family members handed over all his remaining documents to the National Cryptography Museum in Maryland, which led to the disclosure of the algorithm.

9. D'Agapeyeff cipher. In 1939, British cartographer of Russian origin Alexander D'Agapeyeff published a book on the basics of cryptography, Codes and Ciphers, in the first edition of which he presented a cipher of his own invention. This code was not included in subsequent editions. Subsequently, D'Agapeyeff admitted that he had forgotten the algorithm for breaking this cipher. It is suspected that the failures that befell everyone who tried to decipher his work were caused by the fact that the author made mistakes when encrypting the text.

But in our time, there is hope that the cipher can be broken using modern methods - for example, a genetic algorithm.

10. Taman Shud. On December 1, 1948, the body of a man was found on the Australian coast at Somerton, near Adelaide, dressed in a sweater and coat despite a typically hot day for the Australian climate. No documents were found on him, and attempts to match his teeth and fingerprints against records of living persons came to nothing. A pathological examination revealed an unnatural rush of blood, which filled, in particular, his abdominal cavity, as well as enlargement of the internal organs, but no foreign substances were found in his body. At the railway station a suitcase was found that could have belonged to the deceased; in it were trousers with a secret pocket containing a scrap of paper, torn from a book, with the words "Tamam Shud" printed on it. The investigation established that the scrap had been torn from a very rare copy of the Rubaiyat of the great Persian poet Omar Khayyam. The book itself was found on the back seat of a car that had been left unlocked. On the back cover of the book were five lines carelessly scrawled in capital letters, and the meaning of this message has never been deciphered. To this day this story remains one of Australia's most mysterious cases.


Encryption is the most widely used cryptographic method for maintaining the confidentiality of information; it protects data from unauthorized access. First, let us look at the basic methods of cryptographic information protection. In short, cryptography is the science of protecting information using mathematical methods. There is also a science, the opposite of cryptography, devoted to methods of breaking protected information: cryptanalysis. The combination of cryptography and cryptanalysis is commonly called cryptology. Cryptographic methods can be classified in various ways, but most often they are divided according to the number of keys used in the corresponding cryptographic algorithms (see Fig. 1):

  1. Keyless, which use no keys at all.
  2. Single-key, which use an additional key parameter - usually a secret key.
  3. Two-key, which use two keys in their calculations: a secret one and a public one.

Fig. 1. Cryptographic algorithms

Overview of Cryptographic Methods

Encryption is the main method of protection; we will look at it in detail below.

It is worth saying a few words about other cryptographic methods:

  1. An electronic signature is used to confirm the integrity and authorship of data. Data integrity means that the data has not been accidentally or intentionally altered during storage or transmission.
    Electronic signature algorithms use two types of keys:
    • the secret key is used to calculate the electronic signature;
    • the public key is used to verify it.
    When using a cryptographically strong electronic signature algorithm and when properly storing and using the secret key (that is, when it is impossible for anyone other than its owner to use the key), no one else is able to calculate the correct electronic signature of any electronic document.
  2. Authentication allows you to verify that the user (or remote computer) really is who he says he is. The simplest authentication scheme is a password one - a password is used as a secret element, which is presented by the user when checking it. Such a scheme has been proven to be weak if special administrative and technical measures are not applied to strengthen it. And based on encryption or hashing (see below), you can build really strong user authentication schemes.
  3. There are various cryptographic checksumming methods:
    • key and keyless hashing;
    • calculation of imitation inserts (imitovstavki - keyed check values in the GOST sense);
    • use of message authentication codes.
    In fact, all of these methods, in various ways, compute from data of arbitrary size, with or without a secret key, a fixed-size checksum that corresponds to the original data (a short sketch in code follows this list).
    Such cryptographic checksumming is widely used in various information security methods, for example:
    • to confirm the integrity of any data in cases where the use of an electronic signature is impossible (for example, due to high resource consumption) or is redundant;
    • in the electronic signature schemes themselves, it is usually the hash of the data that is “signed”, and not the entire data;
    • in various user authentication schemes.
  4. Random and pseudo-random number generators allow you to create sequences of random numbers that are widely used in cryptography, in particular:
    • random numbers are needed to generate secret keys, which, ideally, should be completely random;
    • random numbers are used in many electronic signature algorithms;
    • random numbers are used in many authentication schemes.
    It is not always possible to obtain absolutely random numbers - this requires high-quality hardware generators. However, based on symmetric encryption algorithms, it is possible to build high-quality pseudo-random number generators.
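
To illustrate the keyed and keyless checksumming mentioned in item 3, here is a minimal sketch using Python's standard hashlib and hmac modules (the message and key below are arbitrary placeholders):

```python
import hashlib
import hmac

message = b"payment order #42"     # arbitrary example data
secret_key = b"shared secret"      # arbitrary example key

# Keyless hashing: anyone can recompute this digest.
digest = hashlib.sha256(message).hexdigest()

# Keyed hashing (a message authentication code): only holders
# of the secret key can compute or verify this value.
mac = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

print(digest)
print(mac)
```
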
Encryption

Encryption of information is the transformation of open (plaintext) information into encrypted information (most often called ciphertext or a cryptogram), and vice versa. The first part of this process is called encryption, the second decryption.

You can think of encryption as the following formula:

C = E_k1(M),

where:
M (message) - the open information,
C (cipher text) - the ciphertext obtained as a result of encryption,
E (encryption) - the encryption function, which performs cryptographic transformations on M,
k1 (key) - a parameter of the function E, called the encryption key.

In the GOST 28147-89 standard (which defines the domestic symmetric encryption algorithm), the concept of a key is defined as follows: "a specific secret state of some parameters of a cryptographic transformation algorithm, ensuring the selection of one transformation from the set of all possible transformations for the given algorithm."

The key may belong to a specific user or group of users and be unique to them. Information encrypted with a particular key can be decrypted only with the same key or with a key related to it by a certain relationship.

The decryption can be represented in a similar way:

M" = D k2 (C),

where:
M' - the message obtained as a result of decryption,
D (decryption) - the decryption function, which, like the encryption function, performs cryptographic transformations on the ciphertext,
k2 - the decryption key.

To obtain the correct plaintext as a result of decryption (that is, the same text that was encrypted: M' = M), the following conditions must be satisfied simultaneously:

  1. The decryption function must match the encryption function.
  2. The decryption key must match the encryption key.

Without the correct key k2 it is impossible to obtain the original message M' = M even using the correct function D. Here the word "impossible" usually means that the computation cannot be performed in real time with existing computing resources.
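
As a minimal illustration of the notation above, here is a toy XOR keystream "cipher" in Python (not a real algorithm; the key is an arbitrary placeholder):

```python
from itertools import cycle

def E(key: bytes, message: bytes) -> bytes:
    """Toy encryption: XOR the message with a repeated key."""
    return bytes(m ^ k for m, k in zip(message, cycle(key)))

def D(key: bytes, ciphertext: bytes) -> bytes:
    """Toy decryption: the same XOR operation undoes E."""
    return bytes(c ^ k for c, k in zip(ciphertext, cycle(key)))

k1 = k2 = b"example key"             # symmetric case: one shared key
M = b"open information"
C = E(k1, M)                         # C = E_k1(M)
assert D(k2, C) == M                 # M' = D_k2(C) equals M only with the right key
assert D(b"wrong key 123", C) != M   # a wrong key yields garbage
```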

Encryption algorithms can be divided into two categories (see Fig. 1):

  1. Symmetric encryption algorithms.
  2. Asymmetric encryption algorithms.

In symmetric encryption algorithms, decryption usually uses the same key as encryption, or a key related to it by some simple relationship (the latter is much less common, especially in modern algorithms). Such a key, common to encryption and decryption, is usually called simply the encryption key.

In asymmetric encryption, the encryption key k1 is easily calculated from the key k2, in such a way that the reverse calculation is impossible. For example, the relationship between the keys could be:

k1 = a^k2 mod p,

where a and p are parameters of the encryption algorithm of sufficiently large size.

This key relationship is also used in electronic signature algorithms.
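
A quick numeric sketch of such a one-way key relationship (the parameter values below are small illustrative placeholders; real systems use moduli of hundreds or thousands of bits):

```python
# k1 = a^k2 mod p: easy to compute forwards, hard to invert for large p
# (inverting it is the discrete logarithm problem).
a = 5                      # public algorithm parameter (toy value)
p = 2_147_483_647          # public prime modulus (toy value; real ones are far larger)
k2 = 123_456_789           # secret key

k1 = pow(a, k2, p)         # public key, computed in microseconds
print(k1)                  # recovering k2 from k1, a and p is the hard direction
```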

The main characteristic of an encryption algorithm is its cryptographic strength, which determines its resistance to being broken by cryptanalysis. This characteristic is usually expressed as the time interval required to break the cipher.

Symmetric encryption is less convenient because, when transmitting encrypted information to someone, the recipient must receive the key in advance in order to decrypt the information. Asymmetric encryption does not have this problem (since the public key can be freely transmitted over the network), but it has problems of its own, in particular public-key spoofing and slow encryption speed. Most often, asymmetric encryption is used together with symmetric encryption - to transmit the symmetric encryption key with which the bulk of the data is encrypted. However, key storage and transfer schemes are a topic for a separate article. Here I will allow myself to state that symmetric encryption is used much more often than asymmetric encryption, so the rest of the article will be devoted only to symmetric encryption.

There are two types of symmetric encryption:

  • Block encryption - information is divided into blocks of a fixed length (for example, 64 or 128 bits), after which these blocks are encrypted one by one. Moreover, in different encryption algorithms, or even in different operating modes of the same algorithm, blocks may be encrypted independently of each other or "with chaining", when the result of encrypting the current block of data depends on the value of the previous block or on the result of encrypting the previous block.
  • Stream encryption - needed, first of all, when information cannot be divided into blocks - say, a stream of data in which each character must be encrypted and sent on without waiting for enough data to form a block. Stream encryption algorithms therefore encrypt data bit by bit or character by character. Some classifications, it should be said, do not distinguish between block and stream encryption, treating stream encryption as the encryption of blocks of unit length.
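
A toy sketch of the "chaining" idea mentioned above (the "block cipher" here is just an XOR with the key, purely to show how each ciphertext block feeds into the next; all names and values are illustrative):

```python
def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher: XOR the block with the key.
    return bytes(b ^ k for b, k in zip(block, key))

def encrypt_with_chaining(blocks: list[bytes], key: bytes, iv: bytes) -> list[bytes]:
    ciphertext = []
    previous = iv                              # initialization vector starts the chain
    for block in blocks:
        mixed = bytes(b ^ p for b, p in zip(block, previous))
        encrypted = toy_block_encrypt(mixed, key)
        ciphertext.append(encrypted)
        previous = encrypted                   # each result feeds into the next block
    return ciphertext

key = b"8bytekey"
iv  = b"initvect"
blocks = [b"BLOCK 01", b"BLOCK 01", b"BLOCK 02"]   # note: two identical plaintext blocks
print(encrypt_with_chaining(blocks, key, iv))      # identical blocks encrypt differently
```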

Let's look at what block symmetric encryption algorithms look like from the inside.

Structure of encryption algorithms

The vast majority of modern encryption algorithms work in a very similar way: a certain transformation is performed on the data block being encrypted using the encryption key, and this transformation is repeated a certain number of times (rounds). Depending on the type of repeated transformation, encryption algorithms are usually divided into several categories. Various classifications exist here too; I will give one of them. By structure, encryption algorithms are classified as follows:

  1. Algorithms based on the Feistel network.

    The Feistel network involves dividing the processed data block into several subblocks (most often two), one of which is processed by a certain function f() and superimposed onto one or more of the other subblocks. Fig. 2 shows the most common structure of algorithms based on the Feistel network. (A toy sketch of such a network in code is given after this list.)

    Fig. 2. Structure of algorithms based on the Feistel network.

    An additional argument of the function f(), denoted in Fig. 2 as Ki, is called the round key. A round key is the result of processing the encryption key by a key expansion procedure, whose task is to obtain the required number of keys Ki from an initial encryption key of relatively small size (at present, 128 bits is considered a sufficient size for a symmetric encryption key). In the simplest cases, the key expansion procedure simply splits the key into several fragments, which are used in turn in the encryption rounds; much more often the procedure is rather complex, and the keys Ki depend on the values of most of the bits of the original encryption key.

    The superposition of the processed subblock onto an unprocessed one is most often done using the logical XOR operation (as shown in Fig. 2). Quite often, instead of XOR, addition modulo 2^n is used, where n is the subblock size in bits. After the superposition, the subblocks are swapped, so that in the next round of the algorithm a different subblock of data is processed.

    This structure of encryption algorithms is named after Horst Feistel, one of the developers of the Lucifer encryption algorithm and of the DES (Data Encryption Standard) algorithm developed on its basis - the former (but still widely used) US encryption standard. Both of these algorithms have a structure similar to that shown in Fig. 2. Other algorithms based on the Feistel network include the domestic encryption standard GOST 28147-89, as well as such well-known algorithms as RC5, Blowfish, TEA and CAST-128.

    Most modern encryption algorithms are based on the Feistel network - due to the many advantages of such a structure, among which the following are worth noting:

    • Algorithms based on the Feistel network can be designed so that the same algorithm code is used for encryption and decryption - the only difference between the two operations being the order in which the keys Ki are applied; this property is most useful when the algorithm is implemented in hardware or on platforms with limited resources. An example of such an algorithm is GOST 28147-89.
    • Algorithms based on the Feistel network are the most studied: a huge amount of cryptanalytic research has been devoted to them, which is an undoubted advantage both when designing an algorithm and when analyzing it.

    There is also a more complex structure of the Feistel network, an example of which is shown in Fig. 3.

    Fig. 3. Structure of the generalized Feistel network.

    This structure is called generalized or expanded Feistel network and is used much less frequently than the traditional Feistel network. An example of such a Feistel network is the RC6 algorithm.

  2. Algorithms based on substitution-permutation networks (SP networks - Substitution-Permutation networks).

    Unlike the Feistel network, SP networks process the entire encrypted block in one round. Data processing comes down mainly to substitutions (when, for example, a fragment of the input value is replaced by another fragment according to a substitution table, which may depend on the value of the key Ki) and permutations that depend on the key Ki (a simplified diagram is shown in Fig. 4).

    Fig. 4. Substitution-permutation network.

    However, such operations are also typical for other types of encryption algorithms, therefore, in my opinion, the name “substitution-permutation network” is quite arbitrary.

    SP networks are much less common than Feistel networks; Examples of SP networks include the Serpent or SAFER+ algorithms.

  3. Algorithms with the "square" (Square) structure.

    The “square” structure is characterized by the representation of the encrypted data block in the form of a two-dimensional byte array. Cryptographic transformations can be performed on individual bytes of an array, as well as on its rows or columns.

    The structure takes its name from the Square algorithm, developed in 1996 by Vincent Rijmen and Joan Daemen, the future authors of the Rijndael algorithm, which became the new US encryption standard AES after winning an open competition. The Rijndael algorithm also has a Square-like structure; other examples include the Shark algorithm (an earlier development by Rijmen and Daemen) and Crypton. A disadvantage of algorithms with the "square" structure is that they have been studied relatively little, which did not prevent the Rijndael algorithm from becoming the new US standard.

    Fig. 5. The Rijndael algorithm.

    Fig. 5 shows an example of an operation on a data block performed by the Rijndael algorithm.

  4. Algorithms with a non-standard structure, that is, algorithms that cannot be classified into any of the types listed above. It is clear that ingenuity can be limitless, so it is difficult to classify every possible variant of encryption algorithm. An example of an algorithm with a non-standard structure is the FROG algorithm, which is unique in its structure: in each of its rounds, two bytes of the encrypted data are modified according to rather complex rules (see Fig. 6).

    Fig. 6. Modification of two bytes of encrypted data.

    Strict boundaries between the structures described above are not defined, so algorithms that different experts assign to different structural types are encountered quite often. For example, the CAST-256 algorithm is classified by its author as an SP network, while many experts call it an extended Feistel network. Another example is the HPC algorithm, called a Feistel network by its author but considered by experts to be an algorithm with a non-standard structure.
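
Here is the toy Feistel-network sketch promised above: a two-subblock network with an arbitrary round function, showing that decryption is the same procedure with the round keys applied in reverse order (everything here - the round function, the keys, the block size - is an illustrative placeholder, not any real cipher):

```python
def f(subblock: int, round_key: int) -> int:
    # Arbitrary round function acting on a 32-bit subblock.
    return ((subblock * 0x9E3779B1) ^ round_key) & 0xFFFFFFFF

def feistel(left: int, right: int, round_keys: list[int]) -> tuple[int, int]:
    # One pass over the network; decryption uses the same code
    # with the round keys taken in reverse order.
    for k in round_keys:
        left, right = right, left ^ f(right, k)   # overlay via XOR, then swap halves
    return left, right

round_keys = [0x11111111, 0x22222222, 0x33333333, 0x44444444]

plaintext = (0xDEADBEEF, 0x01234567)              # 64-bit block as two 32-bit halves
l, r = feistel(*plaintext, round_keys)            # encrypt

# Feed the swapped ciphertext halves, run with reversed keys, swap the result back.
r2, l2 = feistel(r, l, list(reversed(round_keys)))
assert (l2, r2) == plaintext
```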

Solving the problem of determining a key by simply trying all possible values is generally impractical, except when a very short key is used. Therefore, if a cryptanalyst wants to have a real chance of breaking a cipher, he must abandon brute-force methods and use a different strategy. Many encryption schemes can be solved by statistical analysis using the frequency of occurrence of individual characters or combinations of characters. To complicate the breaking of a cipher by statistical analysis, C. Shannon proposed two encryption concepts, called confusion and diffusion. Confusion is the use of substitution such that the relationship between the key and the ciphertext becomes as complex as possible. Applying this concept complicates the use of statistical analysis, which would otherwise narrow the search area for the key, so that decrypting even a very short cryptogram requires searching through a large number of keys. In turn, diffusion is the application of transformations that smooth out the statistical differences between symbols and their combinations. As a result, the cryptanalyst's use of statistical analysis can lead to a positive result only if a sufficiently large segment of the ciphertext is intercepted.

The goals proclaimed by these concepts are achieved through repeated use of elementary encryption methods such as substitution, permutation and scrambling.

10.4.1. Substitution method.

The simplest method, and the one with the longest history, is the substitution method: a character of the source text is replaced by another character, selected from the same or another alphabet according to a rule specified by the encryption key. The position of the character in the text does not change. One of the earliest examples of the substitution method is the Caesar cipher, used by Gaius Julius Caesar during his Gallic campaigns. In it, each letter of the plaintext was replaced by another letter taken from the same alphabet but cyclically shifted by a certain number of positions. The use of this encryption method is illustrated by the example in Fig. 10.3, in which the encryption transformation is based on an alphabet with a cyclic shift of five positions.

Fig. 10.3. Caesar cipher with a cyclic shift of five positions: (a) the original text; (b) the cryptogram.

Obviously, the cipher key is the value of the cyclic shift. If you choose a key different from the one in the example, the cipher changes.
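
A minimal sketch of the Caesar cipher in Python for the Latin alphabet (the shift of 5 matches the example above; the sample phrase is an arbitrary placeholder):

```python
import string

ALPHABET = string.ascii_uppercase          # 26-letter Latin alphabet

def caesar(text: str, shift: int) -> str:
    # Replace each letter with the one `shift` positions further on (cyclically).
    return "".join(
        ALPHABET[(ALPHABET.index(ch) + shift) % len(ALPHABET)] if ch in ALPHABET else ch
        for ch in text.upper()
    )

cryptogram = caesar("VENI VIDI VICI", 5)   # encrypt with a shift of 5
print(cryptogram)                          # AJSN ANIN ANHN
recovered = caesar(cryptogram, -5)         # decrypt by shifting back
print(recovered)
```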

Another example of a classical scheme based on the substitution method is the encryption system known as the Polybius square. For the Russian alphabet this scheme can be described as follows. First the letters Е and Ё, И and Й, and Ъ and Ь are each combined into one character; their true value in the decrypted text is easily restored from context. The 30 resulting characters of the alphabet are then placed in a 6×5 table, an example of which is shown in Fig. 10.4.

Fig. 10.4. The Polybius square for the Russian alphabet (a 6×5 table).

Any plaintext letter is encrypted by specifying its address (i.e. the row and column number, or vice versa) in this table. So, for example, the word CAESAR is encrypted with the Polybius square as 52 21 23 11 41 61. The cipher can obviously be changed by rearranging the letters in the table. It is also worth noting that those who have been on a tour of the casemates of the Peter and Paul Fortress will remember the guide's story of how prisoners tapped messages to one another; their method of communication is fully covered by this encryption method.
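
For a sketch in code it is simpler to use the common Latin-alphabet variant, a 5×5 square with I and J sharing a cell (this differs from the 6×5 Russian table in the figure; the layout below is an illustrative assumption):

```python
# Polybius square for the Latin alphabet: a 5x5 table with I and J sharing a cell.
SQUARE = ["ABCDE", "FGHIK", "LMNOP", "QRSTU", "VWXYZ"]

def polybius_encrypt(text: str) -> str:
    pairs = []
    for ch in text.upper().replace("J", "I"):
        for row, letters in enumerate(SQUARE, start=1):
            if ch in letters:
                pairs.append(f"{row}{letters.index(ch) + 1}")  # address = row, column
    return " ".join(pairs)

print(polybius_encrypt("CAESAR"))   # 13 11 15 43 11 42
```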

An example of a polyalphabetic cipher is a scheme based on the so-called progressive key of Trithemius. The basis of this encryption method is the table shown in Fig. 10.5, whose rows are copies of the original alphabet, each cyclically shifted by one position relative to the preceding row. Thus the first row has zero shift, the second is cyclically shifted one position to the left, the third two positions relative to the first row, and so on.

Fig. 10.5. The Trithemius table: cyclically shifted copies of the alphabet.

One method of encryption with such a table is to take, instead of the first plaintext character, the character of the first cyclic shift of the original alphabet standing under the character being encrypted, instead of the second plaintext character the character of the row corresponding to the second cyclic shift, and so on. An example of a message encrypted in this way is presented below (Fig. 10.6).

Fig. 10.6. Encryption with the progressive Trithemius key: the plaintext and the resulting ciphertext.

Several interesting variants of ciphers based on the progressive Trithemius key are known. In one of them, called the Vigenère key method, a keyword is used that specifies the rows for encrypting and decrypting each successive plaintext character: the first letter of the key specifies the row of the table in Fig. 10.5 with which the first character of the message is encrypted, the second letter of the key determines the row that encrypts the second plaintext character, and so on. Let the word "THROMB" be chosen as the key; then a message encrypted with the Vigenère key can be presented as follows (Fig. 10.7). It is obvious that the key can be recovered by statistical analysis of the ciphertext.

Fig. 10.7. Encryption with the Vigenère key word "THROMB": the plaintext and the resulting ciphertext.
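
A compact sketch of the Vigenère keyword method for the Latin alphabet (each key letter selects the shifted alphabet row for one plaintext character; "THROMB" is the key from the example above, and the sample plaintext is an arbitrary placeholder):

```python
from itertools import cycle

A = ord("A")

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    sign = -1 if decrypt else 1
    out = []
    for ch, k in zip(text.upper(), cycle(key.upper())):
        shift = sign * (ord(k) - A)               # row selected by the key letter
        out.append(chr((ord(ch) - A + shift) % 26 + A))
    return "".join(out)

ciphertext = vigenere("ATTACKATDAWN", "THROMB")
print(ciphertext)
print(vigenere(ciphertext, "THROMB", decrypt=True))   # ATTACKATDAWN
```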

A variation of this method is the so-called Vigenère auto (plaintext) key method, in which a single letter or word is used as the forming key. This key provides the starting row or rows for encrypting the first or first few characters of the plaintext, as in the example discussed earlier. The plaintext characters themselves are then used as the key for selecting the encryption rows. In the example below, the letter "I" is used as the forming key (Fig. 10.8):

Fig. 10.8. Vigenère plaintext auto-key encryption with the forming key "I": the plaintext and the resulting ciphertext.

As the example shows, the choice of encryption rows is completely determined by the content of the plaintext, i.e. plaintext feedback is introduced into the encryption process.

Another variation of the Vigenère method is the auto (ciphertext) key method. Like the plaintext auto-key method, it uses a forming key and feedback. The difference is that, after encryption with the forming key, each subsequent key symbol in the sequence is taken not from the plaintext but from the resulting cryptogram. Below is an example explaining the principle of this encryption method; as before, the letter "I" is used as the forming key (Fig. 10.9):

Fig. 10.9. Vigenère ciphertext auto-key encryption with the forming key "I": the plaintext and the resulting ciphertext.

As can be seen from the above example, although each subsequent key symbol is determined by the preceding cryptogram symbol, it functionally depends on all previous symbols of the open message and the forming key. Consequently, there is an effect of dispersion of the statistical properties of the source text, which makes it difficult for a cryptanalyst to apply statistical analysis. The weak point of this method is that the ciphertext contains the key characters.

By current standards Vigenère encryption is not considered secure; its main contribution is the discovery that non-repeating key sequences can be generated from the messages themselves, or from functions of the messages.

One implementation of the substitution technique that adequately realizes the idea of mixing is the following example, based on a nonlinear transformation. The stream of information bits is first divided into blocks of length m, with each block represented by one of 2^m different symbols. The set of 2^m symbols is then shuffled so that each symbol is replaced by another symbol from the same set. After the shuffling operation the symbol is converted back into an m-bit block. A device implementing this algorithm is shown in Fig. 10.10, where the table specifies the rule for mixing the symbols of the set of 2^m elements.

Fig. 10.10. Substitution block (S-box): the table defines the shuffling rule for the set of symbols.

It is not difficult to show that there are (2^m)! different substitutions, or possible models associated with them, so for large values of m the cryptanalyst's task becomes computationally infeasible: even for moderate m the number (2^m)! is astronomical. Clearly, for such values of m this transformation using a substitution block (S-box) can be considered to possess practical secrecy. However, its practical implementation is hardly possible, since it presupposes the existence of 2^m connections.

Let us now verify that the S-box shown in Fig. 10.10 indeed performs a nonlinear transformation, using the superposition principle: a transformation T is linear if T(a + b) = T(a) + T(b) for all inputs a and b. Taking two input symbols a and b from the table of Fig. 10.10 and comparing T(a + b) with T(a) + T(b), one finds that the equality does not hold, from which it follows that the S-box is nonlinear.
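The counting argument and the superposition test can be illustrated with a short Python sketch. The block length m = 4 and the randomly chosen substitution table are assumptions (the table of Fig. 10.10 is not reproduced here).

```python
import math
import random

M = 4                              # block length in bits (assumed; the table of Fig. 10.10 is not reproduced)
random.seed(1)
SBOX = list(range(2 ** M))
random.shuffle(SBOX)               # one of (2**M)! possible substitution tables

def substitute(x):
    """Replace an m-bit symbol by another symbol of the same set."""
    return SBOX[x]

def is_linear(m):
    """Superposition test: linear over GF(2) iff S(a ^ b) == S(a) ^ S(b) for all a, b."""
    return all(substitute(a ^ b) == substitute(a) ^ substitute(b)
               for a in range(2 ** m) for b in range(2 ** m))

print("possible substitutions:", math.factorial(2 ** M))  # (2**m)! grows astronomically with m
print("S-box is linear:", is_linear(M))                   # almost certainly False for a random table
```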

10.4.2. Rearrangement method.

In rearrangement (or transposition) ciphers, the order of the plaintext characters is changed in accordance with the key, while the meaning of each character is preserved. Permutation ciphers are block ciphers: the source text is first divided into blocks, within which the permutation specified by the key is carried out.

The simplest implementation of this encryption method is the previously discussed interleaving algorithm: the stream of information symbols is split into blocks of length m = k*l, each block is written row by row into a memory matrix of k rows and l columns, and the result is read out by columns. The algorithm is illustrated in Fig. 10.11 by an example in which the phrase X = "Exam time will start soon" is written into the matrix; the output of the permutation device is then the cryptogram obtained by reading the matrix column by column.

Fig. 10.11. Row-by-row writing and column-by-column reading of the phrase X.
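A minimal Python sketch of this row-write/column-read permutation follows; the choice of five columns and the padding character are assumptions made only for the illustration.

```python
def transpose_encrypt(text, n_cols=5, pad="_"):
    """Write the message row by row into a matrix with n_cols columns, then read it out by columns."""
    text = text.replace(" ", "").upper()
    while len(text) % n_cols:
        text += pad                              # pad the last row so the matrix is rectangular
    rows = [text[i:i + n_cols] for i in range(0, len(text), n_cols)]
    return "".join(row[c] for c in range(n_cols) for row in rows)

def transpose_decrypt(cipher, n_cols=5):
    n_rows = len(cipher) // n_cols
    cols = [cipher[i:i + n_rows] for i in range(0, len(cipher), n_rows)]
    return "".join(cols[c][r] for r in range(n_rows) for c in range(n_cols))

ct = transpose_encrypt("Exam time will start soon")
print(ct)
print(transpose_decrypt(ct))
```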

The permutation method just considered can be complicated by introducing two keys: one determines the order in which the rows are written, the other the order in which the columns are read, as illustrated by the table in Fig. 10.12. The result of the transformation is the cryptogram obtained by reading the columns in the order prescribed by the second key.

Fig. 10.12. Permutation with keys controlling the order of writing rows and reading columns.

Fig. 10.13 shows an example of a binary permutation of data (a linear operation), from which it can be seen that the data are simply shuffled, or rearranged. The transformation is carried out by a permutation block (P-box). The mixing technique implemented by this block has one major drawback: it is vulnerable to probing with decoy messages. Such a decoy message is shown in Fig. 10.13: a single one is fed to the input with all other bits set to zero, which reveals one of the internal connections. If a cryptanalyst were to analyze such a scheme with a chosen-plaintext attack, he would send a sequence of similar decoy messages, shifting the single 1 by one position with each transmission; as a result, all input-output connections would be established. This example demonstrates why the security of a scheme must not rest on the secrecy of its architecture.
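The probing attack on a bare P-box is easy to demonstrate in Python. The 8-bit block size and the particular secret wiring below are assumptions; the point is that n probe messages, each containing a single 1, expose the whole permutation.

```python
import random

N = 8
random.seed(2)
WIRING = list(range(N))
random.shuffle(WIRING)                  # secret wiring of the P-box: output i takes input WIRING[i]

def p_box(bits):
    return [bits[WIRING[i]] for i in range(N)]

# Chosen-plaintext probing: feed a single 1 with the rest zeros, shifting it one position each time.
recovered = [None] * N
for pos in range(N):
    probe = [1 if i == pos else 0 for i in range(N)]
    out = p_box(probe)
    recovered[out.index(1)] = pos       # the position of the 1 at the output reveals one connection

print("secret wiring :", WIRING)
print("recovered     :", recovered)     # identical: all internal connections are exposed
```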

10.4.3. Gamma method.

Many modern telecommunication systems that employ scrambling operations represent attempts to approach perfect secrecy. Scrambling is the process of superimposing the codes of a random sequence of numbers onto the codes of the plaintext characters; this sequence is also called the gamma (after the Greek letter γ, used in mathematical formulas to denote a random process). Gamma encryption belongs to the stream methods: successive plaintext characters are converted one after another into ciphergram characters, which increases the conversion speed. For example, a stream of information bits x(t) arrives at one input of the modulo-2 adder shown in Fig. 10.14, while a scrambling binary sequence g(t) is applied to the second input. Ideally g(t) should be a truly random sequence with equiprobable zeros and ones; then the encrypted output stream y(t) = x(t) ⊕ g(t) is statistically independent of the information sequence x(t), and the sufficient condition of perfect secrecy is satisfied. In fact, complete randomness of g(t) is not required, because otherwise the recipient would be unable to recover the plaintext. Indeed, recovery on the receiving side must follow the rule x(t) = y(t) ⊕ g(t), so exactly the same scrambling sequence, with the same phase, must be generated at the receiver; with an absolutely random g(t) this would be impossible.

In practice, pseudo-random sequences (PRS), which can be reproduced on the receiving side, are widely used as scrambling sequences. In stream encryption technology the keystream is most often produced by a generator based on a linear feedback shift register (LFSR). A typical structure of such a generator, shown in Fig. 10.15, includes a shift register consisting of n delay elements (stages), having 2^n possible states and each storing one field element during a clock interval; a feedback circuit comprising multipliers of the elements (states) stored in the stages by constants c_i; and adders. The formation of the pseudo-random sequence is described by a recurrence relation of the form

a(k) = c1*a(k-1) + c2*a(k-2) + … + cn*a(k-n),

where the coefficients c_i are fixed constants of the underlying field (addition and multiplication are modulo 2 in the binary case), so that each successive element of the sequence is computed from the n preceding ones.

Since the number of different register states is finite (no more than 2^n for a binary register of n stages), after a certain number of clock cycles some state inevitably repeats one that has occurred before. Starting from a fixed initial loading (state), the circuit of Fig. 10.15 generates one and only one sequence, determined by the above recursion; hence a repetition of the register state leads to repetition of all subsequently generated symbols, which means that any pseudo-random sequence is periodic. Moreover, the all-zero state of the register (zeros in every stage) always produces a degenerate infinite sequence consisting only of zeros. This case is obviously useless, so the zero state must be excluded. As a result, no more than 2^n - 1 admissible register states remain, which limits the maximum possible period of the sequence to a value not exceeding 2^n - 1.

Example 10.4.1. Fig. 10.16, a, shows the implementation of a generator based on a shift register with linear feedback that produces a binary pseudo-random sequence of period 2^n - 1. Note that in the binary case multiplication by one is equivalent to simply connecting the stage output to the adder. Fig. 10.16, b, illustrates the successive register contents (stage states), as well as the state of the feedback output (the feedback point in the diagram), as clock pulses are applied. The sequence is read off as the successive states of the last (n-th) stage; reading the states of the other stages yields copies of the same sequence shifted by one or two clock cycles.
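A linear feedback shift register of this kind takes only a few lines of Python. The tap positions (1, 4) and the seed 0001 below are assumptions chosen to give a maximal-length four-stage register in the spirit of the figures; they are not claimed to reproduce the exact sequence of Fig. 10.16.

```python
def lfsr(taps, state, n_out):
    """Binary LFSR: each new bit is the modulo-2 sum (XOR) of the tapped register stages.

    taps  -- 1-based positions of the stages fed back to the adder (the nonzero c_i)
    state -- initial register contents, newest stage first, e.g. [0, 0, 0, 1]
    """
    reg = list(state)
    out = []
    for _ in range(n_out):
        out.append(reg[-1])                  # the output is read from the last stage
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]                 # modulo-2 sum of the tapped stages
        reg = [fb] + reg[:-1]                # shift; the feedback bit enters the first stage
    return out

# Assumed maximal-length taps (1, 4) with the seed 0001:
seq = lfsr(taps=(1, 4), state=[0, 0, 0, 1], n_out=20)
print("".join(map(str, seq)))   # the pattern repeats with period 2**4 - 1 = 15
```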

At first glance it might seem that the use of a long-period pseudo-random sequence can provide fairly high security. For example, in the cellular mobile communication system of the IS-95 standard, a pseudo-random sequence with a period of 2^42 - 1 elementary chips is used for scrambling. At a chip rate of 1.2288*10^6 chips/s this period lasts (2^42 - 1)/(1.2288*10^6) ≈ 3.58*10^6 s, i.e. more than 41 days.
Therefore, it can be assumed that since the sequence is not repeated over such a long period, it can be considered random and provide perfect secrecy. However, there is a fundamental difference between a pseudo-random sequence and a truly random sequence: a pseudo-random sequence is formed according to some algorithm. Thus, if the algorithm is known, then the sequence itself will be known. As a result of this feature, an encryption scheme using a linear feedback shift register is vulnerable to a known plaintext attack.

To determine the feedback taps, the initial state of the register, and thus the entire sequence, a cryptanalyst needs only 2n plaintext bits and the corresponding ciphertext bits. Obviously, the value 2n is much smaller than the period of the sequence, equal to 2^n - 1. Let us illustrate this vulnerability with an example.

Example 10.4.2. Suppose that the scrambling sequence is the pseudo-random sequence of maximal period 2^4 - 1 = 15 generated by a four-stage linear feedback shift register with initial register state 0001. Assume that a cryptanalyst who knows nothing about the feedback structure of the generator has managed to obtain 2n = 8 bits of the cryptogram together with their plaintext equivalents. Then, by adding the two sequences modulo 2, the cryptanalyst obtains a fragment of the scrambling sequence, which reveals the states of the shift register at successive moments of time: the first four bits of the key fragment give the register state at some time t1, and sliding the four-bit window one position to the right yields the register states at the following moments t2, t3, t4.
Taking into account the linear structure of the feedback circuit, we can write

x(t) = c1*x1(t) + c2*x2(t) + c3*x3(t) + c4*x4(t)   (mod 2),

where x(t) is the sequence symbol generated by the feedback circuit and fed to the input of the first stage of the register, xi(t) is the content of the i-th stage at time t, and ci ∈ {0, 1} indicates the absence or presence of the i-th connection between the output of the i-th register stage and the adder, i.e. specifies the feedback circuit.

Writing down the register states at four consecutive moments of time gives a system of four linear equations in the four unknowns c1, c2, c3, c4. Solving this system over GF(2) yields the values of the feedback coefficients.
Thus, having determined the feedback connections of the linear register and knowing its state at one moment of time, the cryptanalyst can reproduce the scrambling sequence at any moment and is therefore able to decrypt the intercepted cryptogram.
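The attack can be reproduced end to end in Python. The assumed register (taps 1 and 4, seed 0001) and the sample plaintext bits are illustrative; the cryptanalyst's side uses only the 2n known plaintext/ciphertext bits, forms the n linear equations over GF(2) described above, and solves them by Gaussian elimination.

```python
def lfsr(taps, state, n_out):
    """Same binary LFSR as in the previous sketch (taps are 1-based stage numbers)."""
    reg, out = list(state), []
    for _ in range(n_out):
        out.append(reg[-1])
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]
    return out

def solve_gf2(A, b):
    """Gaussian elimination over GF(2): solve A x = b (all entries 0/1, A assumed non-singular)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col])
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

n = 4
plaintext  = [0, 1, 0, 1, 0, 1, 0, 1]                 # 2n known plaintext bits (assumed)
keystream  = lfsr(taps=(1, 4), state=[0, 0, 0, 1], n_out=2 * n)
ciphertext = [p ^ k for p, k in zip(plaintext, keystream)]

# --- cryptanalyst's side: only plaintext and ciphertext are known ---
fragment = [p ^ c for p, c in zip(plaintext, ciphertext)]      # recovered keystream fragment
# The recursion k[j] = c1*k[j-1] ^ c2*k[j-2] ^ ... ^ cn*k[j-n] gives n equations in c1..cn:
A = [[fragment[j - i] for i in range(1, n + 1)] for j in range(n, 2 * n)]
b = [fragment[j] for j in range(n, 2 * n)]
coeffs = solve_gf2(A, b)
print("recovered feedback coefficients c1..c4:", coeffs)       # -> [1, 0, 0, 1], i.e. taps 1 and 4
```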

Generalizing this example to an arbitrary shift register of memory n, the defining equation can be written as

x(t) = c1*x1(t) + c2*x2(t) + … + cn*xn(t)   (mod 2),

and the system of equations takes the matrix form

x = X*c,

where c = (c1, …, cn)^T is the vector of unknown feedback coefficients, x is the vector of n observed feedback symbols, and X is the n x n matrix whose rows are the register states at n consecutive moments of time. It can be shown that the columns of X are linearly independent, so the inverse matrix X^(-1) exists. Hence

c = X^(-1)*x.

Matrix inversion requires on the order of n^3 operations, so for n = 100 this amounts to about 10^6 operations; a computer performing one operation per microsecond will invert the matrix in roughly one second. Obviously, the weakness of the shift register stems from the linearity of the feedback.

To make it harder for a cryptanalyst to compute the elements of the pseudo-random sequence by comparing fragments of plaintext and ciphertext, feedback from the ciphertext is introduced. Fig. 10.17 explains the principle of introducing ciphertext feedback.

Fig. 10.17. Stream encryption with feedback.

First, a preamble is transmitted, containing information about the parameters of the generated pseudo-random sequence, including the value of its initial phase Z00. After every n generated ciphergram symbols, a new phase value is computed and set in the generator. The feedback makes the gamma method sensitive to distortions of the cryptogram: because of interference in the communication channel some received symbols may be corrupted, which leads to computation of an erroneous phase value and hampers further decoding; however, after n correct ciphertext symbols have been received the system recovers. At the same time, such a distortion may also indicate an attempt by an attacker to inject false data.

Almost all cryptographic methods in use involve splitting the message into a large number of pieces (or characters) of fixed size, each of which is encrypted separately, if not independently. This greatly simplifies the encryption task, since messages usually have different lengths.

There are three main encryption approaches: stream encryption, block encryption, and encryption with feedback.

The following four characteristic features of cryptographic methods can be highlighted.

1. Operations on individual bits or blocks.

2. Dependence or independence of the encryption function on the results of encrypting previous parts of the message.

3. Dependence or independence of the encryption of individual message characters on their position in the text. For example, in stream encryption different characters of the message are encrypted taking into account their position in the message. This property is called positional dependence (or independence) of the cipher.

4. Symmetry or asymmetry of the encryption function. This important property defines the essential difference between conventional symmetric (single-key) cryptosystems and asymmetric two-key (public key) cryptosystems. The main difference between the two is that in an asymmetric cryptosystem, knowledge of the encryption (or decryption) key is not sufficient to discover the corresponding decryption (or encryption) key.

Main characteristics of cryptosystems

- Stream encryption: operates on bits; does not depend on the characters of the message; positionally dependent; symmetric.

- Block encryption: operates on blocks; does not depend on the characters of the message; positionally independent; symmetric or asymmetric.

- Encryption with feedback from the ciphertext: operates on bits or blocks; depends on the characters of the message; positionally independent; symmetric.

In a cryptosystem in which the encryption function depends on the characters of the message, error propagation may occur. If, for example, a bit of the ciphertext is corrupted during transmission, then after decryption the plaintext may contain several corrupted bits. Insertion and dropout errors can also lead to catastrophic error propagation during decryption.

Stream ciphers. Stream encryption consists of adding plaintext bits modulo 2 with bits of a pseudo-random sequence.

The advantages of stream ciphers include the absence of error propagation, simple implementation, and high encryption speed.

Their disadvantage is the need to transmit synchronization information before the message header, which must be received before any message can be decrypted. The reason is that if two different messages are encrypted with the same key, the same pseudo-random sequence must be used to decrypt them. This situation can pose a serious threat to the cryptographic strength of the system, and therefore an additional, randomly chosen message key is often used; it is transmitted at the beginning of the message and is used to modify the encryption key. As a result, different messages are encrypted with different sequences.
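The message-key idea can be sketched as follows. The keystream generator here (a hash of the long-term key, the message key and a counter) is a toy construction assumed purely for illustration, not a vetted stream cipher; what matters is that a fresh random message key, sent in the clear, prevents two messages from ever being encrypted with the same keystream.

```python
import hashlib
import os

def keystream(secret_key: bytes, message_key: bytes, length: int) -> bytes:
    """Toy keystream generator: hash(secret || message key || counter), purely illustrative."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret_key + message_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(secret_key: bytes, plaintext: bytes) -> bytes:
    message_key = os.urandom(8)                       # fresh per-message key, sent in the clear
    ks = keystream(secret_key, message_key, len(plaintext))
    return message_key + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(secret_key: bytes, blob: bytes) -> bytes:
    message_key, body = blob[:8], blob[8:]
    ks = keystream(secret_key, message_key, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))

blob = encrypt(b"long-term secret", b"attack at dawn")
print(decrypt(b"long-term secret", blob))
```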

Stream ciphers are widely used in military systems and other similar systems to encrypt data and digitized speech signals. Until recently, such applications were predominant for this encryption method. This is explained, in particular, by the relative simplicity of designing and implementing generators of good encryption sequences. But the main factor, of course, remains the absence of error propagation in the stream cipher.

Since relatively low-quality channels are used to transmit data and voice messages in tactical communication networks, any cryptographic system that increases the already high error rate is not applicable. In such cases, it is necessary to use a cryptosystem that does not propagate errors.

However, the propagation of errors can also be a positive property. Suppose, for example, that encrypted data must be transmitted over a channel with a very low error probability (say, 10^-5) and it is very important that the data be received absolutely accurately. This is a typical situation in computer networks, where an error in a single bit can have catastrophic consequences, so the communication channel must be very reliable. In such a situation one error is as dangerous as 100 or 1000 errors, but 100 or 1000 errors are easier to detect than a single one. Therefore, in this case, error propagation is no longer a disadvantage of the cipher.

The standard method for generating sequences for stream encryption is the method used in the DES data encryption standard in output feedback mode.

Block ciphers. For block ciphers, the plaintext is first split into blocks of equal length, then a key-dependent encryption function is used to transform the plaintext block of length T bits into a ciphertext block of the same length. An important property of block ciphers is that each bit of a ciphertext block is a function of all (or almost all) of the bits of the corresponding plaintext block, and no two plaintext blocks can be represented by the same ciphertext block. The block cipher algorithm can be used in various ways. The four encryption modes in the DES standard actually apply to any block cipher.

These modes received the following names:

- direct encryption mode, or electronic codebook mode, ECB (Electronic Code Book);

- encryption with chaining of ciphertext blocks, CBC (Cipher Block Chaining), sketched in code after this list;

- encryption with feedback from the ciphertext, CFB (Cipher Feedback);

- encryption with feedback from the output, OFB (Output Feedback).
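To make the chaining idea concrete, here is a sketch of CBC built around a toy invertible block transformation. The 8-byte block length and the toy "cipher" (key XOR plus a byte rotation) are assumptions for illustration only and provide no real security; with a real block cipher in their place, the chaining ensures that identical plaintext blocks no longer produce identical ciphertext blocks.

```python
import os

BLOCK = 8  # toy block size in bytes (DES-like length; the "cipher" below is NOT DES)

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    """Stand-in for a real block cipher: XOR with the key, then rotate the bytes (invertible)."""
    x = bytes(b ^ k for b, k in zip(block, key))
    return x[1:] + x[:1]

def toy_block_decrypt(block: bytes, key: bytes) -> bytes:
    x = block[-1:] + block[:-1]
    return bytes(b ^ k for b, k in zip(x, key))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0
    prev, out = iv, bytearray()
    for i in range(0, len(plaintext), BLOCK):
        block = bytes(p ^ c for p, c in zip(plaintext[i:i + BLOCK], prev))  # chain with previous ciphertext
        prev = toy_block_encrypt(block, key)
        out += prev
    return bytes(out)

def cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, bytearray()
    for i in range(0, len(ciphertext), BLOCK):
        cur = ciphertext[i:i + BLOCK]
        out += bytes(p ^ c for p, c in zip(toy_block_decrypt(cur, key), prev))
        prev = cur
    return bytes(out)

key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
msg = b"EXACTLY SIXTEEN!"                      # two 8-byte blocks
ct = cbc_encrypt(msg, key, iv)
print(cbc_decrypt(ct, key, iv))                # -> b'EXACTLY SIXTEEN!'
```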

The main advantage of the direct block cipher ("electronic codebook") mode is that, in a well-designed block cipher system, small changes in the ciphertext cause large and unpredictable changes in the corresponding plaintext, and vice versa.

However, the use of a block cipher in this mode is associated with serious shortcomings. The first of these is that, due to the fixed nature of encryption, even with a relatively large block length, for example 50-100 bits, “dictionary” cryptanalysis is possible in a limited form.

It is clear that a block of this size may be repeated within a message because of the large amount of redundancy in typical natural-language text. As a result, identical plaintext blocks of length T bits in the message will be represented by identical ciphertext blocks, giving the cryptanalyst some information about the contents of the message.

Another potential disadvantage of this cipher is error propagation (this is one of the problems with all types of ciphers except stream ciphers). Changing just one bit in a received ciphertext block will result in the entire block being decrypted incorrectly. This in turn will result in 1 to T distorted bits in the restored source text.

Due to the noted disadvantages, block ciphers are rarely used in this mode to encrypt long messages. However, in financial institutions, where messages often consist of one or two blocks, block ciphers (specifically the DES algorithm) are widely used in this simple form. Since this application involves the possibility of frequent changes of the encryption key, the probability of encrypting two identical blocks of plaintext with the same key is very small. Block ciphers are most often used in encryption systems with feedback from the ciphertext.

It is also possible to build mixed (hybrid) systems of stream and block encryption that use the best properties of each of these ciphers. In such systems stream encryption is combined with pseudo-random permutations: the plaintext is first encrypted as in ordinary stream encryption, and the resulting ciphertext is then divided into blocks of a fixed size, in each of which a pseudo-random permutation is performed under the control of a key (preferably a different permutation for each block).

The order of these two operations can be reversed without affecting the basic properties of the system. The result is a cipher that does not propagate errors but has an additional property that a stream cipher lacks: an eavesdropper cannot tell which plaintext bit corresponds to which ciphertext bit, which makes the encrypted message harder to crack. It should be noted, however, that this is no longer a true block cipher, since here each ciphertext bit is a function of only one plaintext bit rather than of all of them.
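A sketch of such a hybrid scheme is given below. The toy hash-counter keystream and the key-controlled per-block bit permutation are assumptions for illustration; the structure (stream encryption followed by a pseudo-random permutation within each fixed-size block) is the one described above.

```python
import hashlib
import random

BLOCK_BITS = 32

def keystream_bits(key: bytes, n: int):
    """Toy keystream (hash-counter), purely illustrative."""
    bits, counter = [], 0
    while len(bits) < n:
        for byte in hashlib.sha256(key + counter.to_bytes(8, "big")).digest():
            bits.extend((byte >> i) & 1 for i in range(8))
        counter += 1
    return bits[:n]

def hybrid_encrypt(plain_bits, key: bytes):
    assert len(plain_bits) % BLOCK_BITS == 0
    ks = keystream_bits(key, len(plain_bits))
    streamed = [p ^ k for p, k in zip(plain_bits, ks)]          # ordinary stream encryption first
    out = []
    for b, i in enumerate(range(0, len(streamed), BLOCK_BITS)):
        perm = list(range(BLOCK_BITS))
        random.Random(key + bytes([b])).shuffle(perm)           # key-controlled permutation, different per block
        block = streamed[i:i + BLOCK_BITS]
        out.extend(block[perm[j]] for j in range(BLOCK_BITS))
    return out

def hybrid_decrypt(cipher_bits, key: bytes):
    ks = keystream_bits(key, len(cipher_bits))
    out = []
    for b, i in enumerate(range(0, len(cipher_bits), BLOCK_BITS)):
        perm = list(range(BLOCK_BITS))
        random.Random(key + bytes([b])).shuffle(perm)
        block = cipher_bits[i:i + BLOCK_BITS]
        unperm = [0] * BLOCK_BITS
        for j in range(BLOCK_BITS):
            unperm[perm[j]] = block[j]                          # undo the permutation
        out.extend(p ^ k for p, k in zip(unperm, ks[i:i + BLOCK_BITS]))
    return out

bits = [random.randint(0, 1) for _ in range(64)]
assert hybrid_decrypt(hybrid_encrypt(bits, b"key"), b"key") == bits
```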

A public key cryptosystem must be a block cipher system that operates on blocks of fairly large length. This is because a cryptanalyst who knows the public encryption key could otherwise precompute a table of correspondences between plaintext and ciphertext blocks. If the block length is small (for example, 30 bits), the number of possible blocks is not too large (for 30 bits it is 2^30 ≈ 10^9), and a complete table can be compiled that allows instant decryption of any message encrypted with the known public key.

Many different public key cryptosystems have been proposed, the most famous of which is RSA (Rivest, Shamir, Adleman). The cryptographic strength of this system is based on the difficulty of decomposing large numbers into prime factors and the choice of two large prime numbers for the encryption and decryption keys.

It is known that the RSA algorithm cannot be used for high-speed encryption. Even the most optimized software implementations of the algorithm are slow, and the few hardware implementations provide encryption rates of only 10 to 100 Kbit/s (using primes on the order of 2^7, which appears to be the minimum length ensuring the required cryptographic strength). This means that the use of RSA for block encryption is limited, although its use for key distribution, authentication and digital signature generation offers interesting possibilities. Some currently known public-key cryptoalgorithms allow faster encryption than RSA, but they are not yet as popular.

Encryption systems with feedback. Feedback encryption systems exist in various practical versions. As in block cipher systems, messages are divided into blocks of T bits, and special functions are used to convert these blocks into ciphertext blocks, also of T bits. However, whereas in a block cipher such a function depends only on the key, in feedback ciphers it depends both on the key and on one or more preceding ciphertext blocks. This general definition of feedback encryption covers, as special cases, a large number of practically used systems of different types.

The use of block cipher cryptosystems with feedback offers a number of important advantages. The first and most significant is the ability to detect message manipulation by active eavesdroppers; this exploits the fact that errors propagate, as well as the ability of such systems to easily generate a message authentication code (MAC). The second advantage is that CTAK (ciphertext autokey) ciphers, used instead of block ciphers, do not require initial synchronization: if the beginning of a message is missed on reception, the rest of it can still be decrypted successfully once T consecutive ciphertext bits have been received correctly. Note also that feedback encryption systems are used not only to encrypt messages but also to authenticate them.

Block cipher cryptosystems with feedback also have certain disadvantages. The main one is error propagation: a single erroneous bit during transmission can cause from 1 to T + 1 errors in the decrypted text. Thus the requirement to increase T in order to raise cryptographic strength conflicts with the system requirements connected with error propagation. Another disadvantage is that the design and implementation of feedback encryption systems is often more difficult than that of stream encryption systems. Although feedback encryption systems of various types have been in wide use for many years, there are very few specific algorithms designed for them; in most cases the published algorithms are block cipher algorithms modified for special applications.

The first conclusion that can be drawn from this analysis is that most practical cryptosystems use either stream encryption or feedback encryption algorithms. Most stream cipher cryptosystems rely on commercial algorithms (including proprietary or private ones) or on classified government algorithms. This situation will apparently persist in the coming years.

It is also likely that most feedback encryption systems will be based on a special variant of block cipher algorithms, in particular the best-known block cipher algorithm, DES. As for other encryption methods, it can be said that, despite the rapid growth of publications on public-key cryptosystems, only one of them, the RSA system, has stood the test of time.

However, the algorithm of this system has severe implementation limitations and is therefore unsuitable for some cryptographic applications. It can certainly be argued that public-key cryptosystems have had a significant impact on data encryption technology; they are increasingly used, mainly for generating digital signatures and for key management in conventional cryptosystems (for example, for encrypting keys).

Potential users of cryptography can choose between stream cipher systems and feedback cipher systems (possibly based on block cipher algorithms). There are, however, certain application areas, such as financial transactions, where direct block encryption ("electronic codebook") can be used. The choice of cryptoalgorithm largely depends on its purpose; some guidance for choosing the type of encryption is given in the table.


