Multi-scale information content measurement method based on Shannon information

Abstract: In this paper, we present a new multi-scale information content calculation method based on Shannon information (and Shannon entropy). The original method described by Claude E. Shannon, based on the logarithm of the probability of the elements, gives an upper limit to the information content of discrete patterns, but in many cases (for example, in the case of repeating patterns) it is inaccurate and does not approximate the true information content of the pattern well enough. The new mathematical method presented here provides a more accurate estimate of the (internal) information content of any discrete pattern based on Shannon's original function. The method is tested on different data sets and the results are compared with those of other methods, such as compression algorithms.

Introduction

Traditionally, Shannon's information theory [13] has been used to measure the information content of samples. Shannon information, as defined by Claude E. Shannon, is the degree of uncertainty or surprise associated with a given outcome in a set of possible outcomes. Shannon entropy, which is the expected value of Shannon information, is used to quantify the average information content of a discrete sample or message. It serves as a basic concept in information theory and is widely used in communication systems and data compression.
In some situations, such as repeated patterns, Shannon's original information measurement method does not give sufficiently accurate results, because it does not take the structure of the patterns into account; it only considers certain statistical characteristics of them. To address this problem, this paper presents a new multi-scale information content calculation method based on Shannon's original principles. By refining the computational approach, our method offers a more accurate estimate of the internal information content of discrete samples, regardless of their nature.
There are several other methods for measuring the information content of patterns, such as Kolmogorov complexity [8], randomness [9], and compression complexity. The common property of these methods is that they are all suitable for determining and understanding the information content of patterns with some accuracy, and they therefore provide a suitable basis of comparison for evaluating newer methods.
To verify the effectiveness of our new method, it is applied to various data sets and compared with compression algorithms. The results show that our proposed method based on Shannon information closely approximates the results measured by other methods while taking a completely different approach.

Patterns

In this study, we deal with the calculation of the internal quantitative information content of discrete patterns. From the point of view of the calculation of the information content, the nature of the measured object is irrelevant. The information content of events, signals, system states, or data sequences can be calculated, since their models (with finite precision) can all be represented as discrete patterns. By moving along a spatial pattern, we get a temporal pattern and vice versa; therefore, we do not distinguish between spatial and temporal patterns. The basic notation is as follows.
Denote by $\mathbb{M}(R)$ the set of finite sequences that can be generated from the set $R$:

$$\mathbb{M}(R) = \{X \colon \mathbb{N}^+ \to R\} \qquad (1)$$

Let us call the finite sequence $X \in \mathbb{M}(R)$ a pattern:

$$X = [x_1, \ldots, x_N] \qquad (2)$$

Denote the length of the series $X$ by:

$$n(X) = N \qquad (3)$$

Denote the set of possible values of the series $X$ by:

$$R = R_X = \{r_1, r_2, \ldots, r_K\} \qquad (4)$$

Let $f(x)$ denote the number of occurrences of $x \in R_X$ in the series $X$:

$$f(x) = \sum_{i=1}^{N} [x_i = x] \qquad (5)$$

Let the relative frequency of any element $x \in R_X$ of the pattern $X$ be:

$$p(x) = f(x)/N \qquad (6)$$

Denote the concatenation of the patterns $X_1, X_2, \ldots, X_K$ by:

$$X_1 X_2 \cdots X_K = \bigoplus_{i=1}^{K} X_i \qquad (7)$$
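The notation above translates directly into code. The following minimal Python sketch (the function names are ours, not part of the paper) implements the pattern length, value set, occurrence count, relative frequency and concatenation:

```python
# Minimal helpers for the notation of Eqs. (1)-(7); names are assumptions.
from typing import Hashable, Sequence

def n(X: Sequence[Hashable]) -> int:
    """Length of the pattern X, Eq. (3)."""
    return len(X)

def value_set(X: Sequence[Hashable]) -> set:
    """Set of possible values R_X of the pattern, Eq. (4)."""
    return set(X)

def f(X: Sequence[Hashable], x: Hashable) -> int:
    """Number of occurrences of x in X, Eq. (5)."""
    return sum(1 for xi in X if xi == x)

def p(X: Sequence[Hashable], x: Hashable) -> float:
    """Relative frequency of x in X, Eq. (6)."""
    return f(X, x) / n(X)

# Concatenation of patterns, Eq. (7): sequences are simply joined.
X1, X2 = list("110"), list("10")
X = X1 + X2
print(n(X), value_set(X), p(X, "1"))   # 5 {'0', '1'} 0.6
```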

Information content

The information content can be interpreted intuitively when only the interpretable information content is examined [1]. In this study we examine the amount of the entire internal information content without interpreting it or considering the context.
The information content of a pattern can be characterized by the improbability of individual elements of the pattern (Shannon information [13]), the length of the most concise description of the pattern (Kolmogorov complexity [8]), or the degree of randomness of the pattern [9].
A fundamental difference between Shannon's and Kolmogorov's viewpoints is that Shannon considered only the probabilistic characteristics of the random source that created the pattern, ignoring the pattern itself, whereas Kolmogorov focused only on the pattern itself [5]. In their definitions, Kolmogorov and Chaitin (somewhat inaccurately) called a pattern with maximum information content random [10].
Information, complexity and randomness have such similar properties that we can reasonably assume that they are essentially approaching the same thing with different methods. It is sufficient to consider that the Shannon information, Kolmogorov complexity and randomness of a pattern consisting of identical elements are all minimal, while in the case of a true random pattern all three values are maximal, and they all assign the highest information value to data sets with maximum entropy [1].
The concepts of entropy and information are often confused [3], so it is important to mention that entropy can also be understood as the average information content per element.
Approached intuitively, the amount of information is a function for which the following conditions are met:

  1. The information content of a pattern with zero length or consisting of identical elements is zero.
  2. The information content of the pattern consisting of repeating sections is (almost) identical to the information content of the repeating section.
  3. A pattern and its reflection have the same information content.
  4. The sum of the information content of patterns with disjoint value sets is smaller than the information content of the concatenated pattern.
  5. The information content of true random patterns is almost maximal.
Let the information content be a function $I$ that assigns a non-negative real number to any pattern $X \in \mathbb{M}(R)$:

$$I \colon \mathbb{M}(R) \to \mathbb{R}^+_0 \qquad (8)$$

In addition, the following conditions are met:

  1. $I(X) = 0 \iff |R_X| < 2$
  2. $I\left(\bigoplus_{i=1}^{K} X\right) \approx I(X)$
  3. $I\left(\bigoplus_{i=1}^{K} X_i\right) = I\left(\bigoplus_{i=K}^{1} X_i\right)$
  4. $\left|\bigcup_{i=1}^{K} R_{X_i}\right| = \sum_{i=1}^{K} |R_{X_i}| \implies I\left(\bigoplus_{i=1}^{K} X_i\right) > \sum_{i=1}^{K} I(X_i)$
  5. $I(X) \le I(X_{TR})$ for all $X \in \mathbb{M}(R)$ with $|X| = |X_{TR}|$, where $X_{TR} \in \mathbb{M}(R)$ is a true random pattern.
Since any pattern can be described in non-decomposable binary form, the unit of information content should be the bit.
It can be seen that for any pattern $X \in \mathbb{M}(R)$, if $N = n(X)$ and $K = |R|$, then the maximum information content of $X$ is:

$$I_{MAX}(X) = N \log_2(K) \qquad (9)$$

That is, $I(X) \le I_{MAX}(X)$ for any pattern $X \in \mathbb{M}(R)$. In the case of a binary pattern, $I_{MAX}(X) = N$, the length of the pattern, which means that at most $N$ bits of information (decisions) are required to describe the pattern.
If the maximum information content is known, the relative information content can be calculated:

$$I^{(rel)}(X) = I(X) / I_{MAX}(X) \qquad (10)$$
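As an illustration, the maximum and relative information content of Eqs. (9) and (10) can be computed with a few lines of Python (a sketch with assumed helper names):

```python
import math

def i_max(X) -> float:
    """Maximum information content N * log2(K), Eq. (9)."""
    N, K = len(X), len(set(X))
    return N * math.log2(K) if K > 1 else 0.0

def i_rel(i_value: float, X) -> float:
    """Relative information content I(X) / I_MAX(X), Eq. (10)."""
    m = i_max(X)
    return i_value / m if m > 0 else 0.0

X = list("111111110000000011111111000000001111111100000000")  # pattern X_C
print(i_max(X))   # 48.0 bits for a 48-element binary pattern
```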

Shannon information

In theory, Kolmogorov complexity would provide a better approximation of the information content of patterns, but it has been proven that it cannot be calculated [5], in contrast to Shannon information [13], which can be calculated efficiently, but approximates the actual information content less well. Shannon information calculates the information content of the pattern based on the expected probability of occurrence (relative frequency) of the elements of the pattern.
The Shannon information of an arbitrary pattern $X \in \mathbb{M}(R)$ is:

$$I_S(X) = \sum_{i=1}^{N} \log_2\left(\frac{1}{p(x_i)}\right) \qquad (11)$$
Since the relative frequency (expected occurrence) of the elements is only one statistical characteristic of the pattern and does not take the order of the elements into account, the Shannon information often gives a very inaccurate estimate of the information content.
The value of the Shannon information is the same for all patterns of the same length whose elements have the same relative frequencies. If $X \in \mathbb{M}(R)$, $Y \in \mathbb{M}(Q)$ and $|R| = |Q| = K$, then:

$$I_S(X) = I_S(Y), \quad \text{if } \{p(r_1), p(r_2), \ldots, p(r_K)\} = \{p(q_1), p(q_2), \ldots, p(q_K)\} \qquad (12)$$
Shannon information ignores the structure of the patterns at different scales, the laws encoded in them, and therefore overestimates the information content of patterns consisting of repeating sections.
The problem can be illustrated with a simple example. Let us calculate the Shannon information of the following three patterns:

  1. X A :001101101010111001110010001001000100001000010000
  2. X B :101010101010101010101010101010101010101010101010
  3. X C :111111110000000011111111000000001111111100000000
In all three cases, the set of values is $R = \{0, 1\}$, the relative frequencies of the two symbols are equal or nearly equal ($p(0) \approx p(1) \approx 0.5$), and the Shannon information is $I_S(X) = \sum_{i=1}^{N} \log_2(1/p(x_i)) \approx 48$ bits, although it is obvious that the information content of the three data series differs significantly. Due to its randomness, the information content of $X_A$ is indeed close to 48 bits, while the information content of the other two data series is much smaller, as they contain repeated sections. In $X_B$, for example, the 2-bit section $[10]$ is repeated, which means that its information content is closer to 2 bits.
The problem is that in the example above we examine the data series at an elementary level, and the Shannon information function does not take into account the larger-scale structure of the data, such as the presence of repeating sections longer than one symbol. It is therefore natural to develop methods that are based on Shannon entropy but analyze the data series over the entire range of resolutions (the entire frequency range), and thus provide a more accurate approximation of the information content. Numerous such solutions have already been published; see, for example, [2] and [6]. This article presents an additional method.
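The following Python sketch (our own illustration, not code from the paper) computes the Shannon information of Eq. (11) and reproduces the problem described above: the repeating patterns $X_B$ and $X_C$ receive the same estimate despite their obvious internal structure.

```python
import math
from collections import Counter

def shannon_information(X) -> float:
    """Shannon information of a pattern, Eq. (11): sum of log2(1/p(x_i))."""
    N = len(X)
    counts = Counter(X)
    return sum(math.log2(N / counts[x]) for x in X)

X_A = "001101101010111001110010001001000100001000010000"
X_B = "101010101010101010101010101010101010101010101010"
X_C = "111111110000000011111111000000001111111100000000"

for name, X in [("X_A", X_A), ("X_B", X_B), ("X_C", X_C)]:
    print(name, round(shannon_information(X), 1))
# X_B and X_C both evaluate to 48.0 bits, although X_B clearly repeats a 2-bit section;
# X_A (with unequal symbol frequencies) evaluates to about 45.8 bits (cf. Table 1: 46).
```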

SSM information

5.1 Shannon information spectrum

Let the pattern $X$ be partitioned into sections of length $r$, where $m = \lfloor N/r \rfloor$:

$$X^{(r)} = [x_1 \ldots x_r,\; x_{r+1} \ldots x_{2r},\; \ldots,\; x_{(m-1)r+1} \ldots x_{mr}] \qquad (13)$$

Let the following series be denoted as the Shannon information spectrum (SP) of the pattern $X$:

$$I_{SP}^{(r)}(X) = I_S(X^{(r)}), \quad r = 1, \ldots, \lfloor N/2 \rfloor \qquad (14)$$

From the sequences $X^{(r)}$ we omit the truncated final section, i.e. any partition shorter than $r$. For $r > \lfloor N/2 \rfloor$, $I_{SP}^{(r)}(X) = 0$, so these values are also omitted from the spectrum.
image: images/blog/publications/multiscale_info_measurement/0_media_PROJECTS_Publik__ci__k_2023_-_Shannon_i_____ci__tartalom_m__r__si_m__dszer_K__pek_ISP.png
Figure 1. Diagram A shows the Shannon information spectrum of the random pattern X_A, and diagram B shows that of the repeating pattern X_C (Appendix I). It can be seen that in case B, a lower value appears at certain frequencies.
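A direct implementation of the partitioning of Eq. (13) and the information spectrum of Eq. (14) might look as follows (a Python sketch with assumed function names; truncated final sections are dropped as described above):

```python
import math
from collections import Counter

def shannon_information(X) -> float:
    """Eq. (11) applied to any sequence of hashable elements."""
    N = len(X)
    counts = Counter(X)
    return sum(math.log2(N / counts[x]) for x in X)

def partition(X, r):
    """X^(r): split X into sections of length r, dropping a truncated tail, Eq. (13)."""
    m = len(X) // r
    return [tuple(X[i * r:(i + 1) * r]) for i in range(m)]

def information_spectrum(X):
    """Shannon information spectrum I_SP^(r)(X) for r = 1..floor(N/2), Eq. (14)."""
    N = len(X)
    return [shannon_information(partition(X, r)) for r in range(1, N // 2 + 1)]

X_C = "111111110000000011111111000000001111111100000000"
print(information_spectrum(X_C)[:10])
# dips appear at section lengths that match the repetition (cf. Figure 1, diagram B)
```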

5.2 Maximal Shannon information spectrum

The Shannon information spectrum is maximal in the case of random data sets. Let the following formula be denoted as the maximum Shannon information spectrum (SMS):

$$I_{SMS}^{(r)}(X) = m \log_2\left(\min\left(K^r, m\right)\right), \quad r = 1, \ldots, \lfloor N/2 \rfloor \qquad (15)$$

$I_{SMS}^{(r)}(X)$ is a supremum for all information spectra with the same value set and pattern length. If $K^r < m$, then in the case of random patterns the partitioning most likely contains all possible sections, so the information content is approximately $m \log_2(K^r)$. If $K^r > m$, then the partitioning cannot contain all possible sections; each section will most likely be unique, so the information content will be $m \log_2(m)$. If $r$ is small enough, then the series $X^{(r)}$ most likely contains all possible sections, therefore for random data sets the measured amount of information approximately equals the maximum possible information content of the pattern, i.e. for small $r$, $I_{SP}^{(r)}(X) \approx I_{MAX}(X) = N \log_2(K)$.
image: images/blog/publications/multiscale_info_measurement/1_media_PROJECTS_Publik__ci__k_2023_-_Shannon_i___artalom_m__r__si_m__dszer_K__pek_ISMS_-_ISP.png
Figure 2. Comparison of maximum Shannon information spectrum (ISMS) and Shannon information spectrum (ISP) of the repeating pattern X C .
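The maximum spectrum of Eq. (15) depends only on the pattern length and the size of the value set, so it can be computed without examining the pattern itself. A minimal sketch, again with assumed names:

```python
import math

def max_information_spectrum(X):
    """Maximum Shannon information spectrum I_SMS^(r)(X) = m * log2(min(K^r, m)), Eq. (15)."""
    N, K = len(X), len(set(X))
    values = []
    for r in range(1, N // 2 + 1):
        m = N // r
        bound = min(K ** r, m)
        values.append(m * math.log2(bound) if bound > 1 else 0.0)
    return values

X_C = "111111110000000011111111000000001111111100000000"
print(max_information_spectrum(X_C)[:5])   # the upper bound of the spectrum at each resolution
```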

5.3 Shannon normalized information spectrum

If we are interested in how large the information content appears relative to the maximum value at each resolution, we can normalize the spectrum with the maximum spectrum to the range $[0, N \log_2(K)]$. Let the following sequence be denoted as the Shannon normalized information spectrum (SNS):

$$I_{SNS}^{(r)}(X) = \begin{cases} \dfrac{I_{SP}^{(r)}(X)}{I_{SMS}^{(r)}(X)} \, I_{MAX}(X), & \text{if } |R_{X^{(r)}}| > 1 \\[2ex] \dfrac{r \, I_{SP}^{(1)}(X)}{N}, & \text{if } |R_{X^{(r)}}| = 1 \end{cases} \qquad \text{where } r = 1, \ldots, \lfloor N/2 \rfloor \qquad (16)$$

If the value set of the partitioning has only one element, i.e. $|R_{X^{(r)}}| = 1$, the normalized value would be 0. In this case the information content should instead be the information content of the repeating section, so the average Shannon information per element at the elementary resolution, $I_{SP}^{(1)}(X)/N$, is multiplied by the length of the section: $r \, I_{SP}^{(1)}(X)/N$.
image: images/blog/publications/multiscale_info_measurement/2_media_PROJECTS_Publik__ci__k_2023_-_Shannon_i____ci__tartalom_m__r__si_m__dszer_K__pek_ISNS.png
Figure 3. Comparison of the Shannon normalized information spectrum (SNS) of patterns from very different sources. The vertical axis represents the amount of information in bits measured at the given resolution. It can be seen how different the spectra of the different patterns are, but in most cases there is a resolution where the information content shows a definite minimum. The minima are marked with an arrow. A: random binary pattern, B: binary pattern with repeating sections, C: DNA section, D: English text, E: ECG signal, F: audio recording containing speech, G: evolution of the number of sunspots between 1700-2021, H: seismogram, I: Lena's photo.
The figures show that different types of patterns have very different and characteristic spectra. This suggests that the type or source of the pattern may be inferred from the nature of the spectrum, but we do not deal with this in this study.

5.4 SSM information

We know that the Shannon information gives an upper estimate in all cases, so we obtain the most accurate approximation of the information content from the normalized spectrum by taking its minimum. Let the information content calculated from the normalized spectrum be denoted as the Shannon spectrum minimum information (SSM information):

$$I_{SSM}(X) = \min_{r = 1, \ldots, \lfloor N/2 \rfloor} \left( I_{SNS}^{(r)}(X) \right) \qquad (17)$$
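Putting the pieces together, the following self-contained Python sketch (our illustration of Eqs. (16) and (17), with assumed function names) computes the normalized spectrum and takes its minimum; on the example patterns it reproduces the SSM values of Table 1 for X_B and X_C (2 and about 13 bits):

```python
import math
from collections import Counter

def shannon_information(X):
    """Shannon information of a sequence of hashable elements, Eq. (11)."""
    N, counts = len(X), Counter(X)
    return sum(math.log2(N / counts[x]) for x in X)

def partition(X, r):
    """X^(r): sections of length r, truncated tail dropped, Eq. (13)."""
    m = len(X) // r
    return [tuple(X[i * r:(i + 1) * r]) for i in range(m)]

def normalized_spectrum(X):
    """Shannon normalized information spectrum I_SNS^(r)(X), Eq. (16)."""
    N, K = len(X), len(set(X))
    i_max = N * math.log2(K) if K > 1 else 0.0
    i_sp_1 = shannon_information(list(X))
    spectrum = []
    for r in range(1, N // 2 + 1):
        parts = partition(X, r)
        m = len(parts)
        if len(set(parts)) > 1:
            i_sms = m * math.log2(min(K ** r, m))
            spectrum.append(shannon_information(parts) / i_sms * i_max)
        else:
            # a single repeating section: use r * I_SP^(1)(X) / N instead of 0
            spectrum.append(r * i_sp_1 / N)
    return spectrum

def ssm_information(X):
    """SSM information: the minimum of the normalized spectrum, Eq. (17)."""
    spectrum = normalized_spectrum(X)
    return min(spectrum) if spectrum else 0.0

X_B = "101010101010101010101010101010101010101010101010"
X_C = "111111110000000011111111000000001111111100000000"
print(round(ssm_information(X_B), 1))   # 2.0 bits (Table 1: 2)
print(round(ssm_information(X_C), 1))   # about 13 bits (Table 1: 13)
```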
Shannon information, SSM information and compression complexity of different patterns (Appendix I) in bits:
| Pattern | Source | $I_{MAX}(X)$ | $I_S(X)$ | $I_{SSM}(X)$ | $I_{ZIP}(X)$ | $I_{7Z}(X)$ | $I_{ZPAQ}(X)$ |
|---|---|---|---|---|---|---|---|
| $X_A$ | Random binary pattern. | 48 | 46 | 40 | | | |
| $X_B$ | Repeating binary pattern. | 48 | 48 | 2 | | | |
| $X_C$ | Repeating binary pattern. | 48 | 48 | 13 | | | |
| $X_D$ | Repeating text. | 362 | 343 | 58 | | | |
| $X_E$ | Duplicate text with one character error. | 374 | 347 | 116 | | | |
| $X_F$ | Random DNA pattern. | 471 | 422 | 409 | | | |
| $X_G$ | DNA segment of COVID virus. | 471 | 405 | 388 | | | |
| $X_H$ | Random string (0-9, a-z, A-Z). | 1209 | 1174 | 1174 | | | |
| $X_I$ | English text (James Herriot's Cat Stories). | 1104 | 971 | 971 | | | |
| $X_J$ | Solar activity between 1700-2021 (A-Z). | 1495 | 1349 | 1295 | | | |
| $X_K$ | Isaac Asimov: True love. | 50901 | 37266 | 32649 | 30904 | 29968 | 25248 |
| $X_L$ | Binary ECG signal. | 80000 | 79491 | 47646 | 52320 | 41032 | 36968 |
| $X_M$ | Binary seismic data. | 313664 | 312320 | 171546 | 83920 | 66064 | 45824 |
| $X_N$ | Speech recording. | 325472 | 325342 | 277489 | 286760 | 257856 | 251408 |
| $X_O$ | Lena. | 524288 | 524216 | 422085 | 443096 | 371360 | 337408 |
Table 1. Comparison of SSM information and compression complexity of different patterns.
Relative Shannon information, SSM information, and compression complexity of different patterns (Appendix I) compared to maximum information:
| Pattern | Source | $I_S^{(rel)}(X)$ [%] | $I_{SSM}^{(rel)}(X)$ [%] | $I_{ZIP}^{(rel)}(X)$ [%] | $I_{7Z}^{(rel)}(X)$ [%] | $I_{ZPAQ}^{(rel)}(X)$ [%] |
|---|---|---|---|---|---|---|
| $X_K$ | Isaac Asimov: True love. | 73 | 64 | 61 | 59 | 50 |
| $X_L$ | Binary ECG signal. | 99 | 60 | 65 | 51 | 46 |
| $X_M$ | Binary seismic data. | 100 | 55 | 27 | 21 | 15 |
| $X_N$ | Speech recording. | 100 | 85 | 88 | 79 | 77 |
| $X_O$ | Lena. | 100 | 81 | 85 | 71 | 64 |
Table 2. Comparison of relative SSM information and relative compression complexity of different patterns.
It can be seen from the tables that SSM information gives results similar to those of the compression algorithms. In general, the more computationally demanding a compression or information measurement procedure is, the closer its result is to the Kolmogorov complexity. In the examined examples, the results of SSM information usually lie between the results of ZIP and 7Z, which, following this rule of thumb, suggests that the computational effort of SSM information is comparable to that of ZIP and 7Z.
image: images/blog/publications/multiscale_info_measurement/3_media_PROJECTS_Publik__ci__k_2023_-_Shannon_i___om_m__r__si_m__dszer_K__pek_IS_ISSM_ZPAQ_EN.png
Figure 4. Comparison of the results of different information measurement methods.
image: images/blog/publications/multiscale_info_measurement/4_media_PROJECTS_Publik__ci__k_2023_-_Shannon_i____si_m__dszer_K__pek_ISSM_ZIP_7Z_ZPAQ_AVR_EN.png
Figure 5. Comparison of the average results of different information measurement methods.

5.5 Comparison with computational complexity

If we do not know the signal set of the signal sequence, the first step is to determine the number of distinct signals occurring in the sequence, which has an asymptotic complexity of $O(N \log N)$.
Determining the Shannon information consists of two steps. In the first step, we determine the frequency of the signals, which has a complexity of $O(N)$, and in the second step, we sum the information of each signal, so the total complexity of computing the Shannon information is $O(N \log N) + O(N) = O(N \log N)$.
For the ZIP, 7Z and ZPAQ algorithms used to calculate the compression complexity, the complexity is usually between $O(N)$ and $O(N \log N)$, but for ZPAQ it may be greater [7] [12] [11].
In the case of SSM information, the first step is also to determine the frequency of the signals, which has a complexity of $O(N)$. In the second step, the Shannon information spectrum is calculated with $O(N) + O(N/2) + O(N/3) + \ldots + O(2) = O(N \log N)$ complexity, and finally the minimum of the spectrum can be determined with $O(N)$ complexity. The complexity of calculating the SSM information in the worst case is therefore $O(I_{SSM}(X)) = O(N \log N) + O(N) + O(N \log N) + O(N) = O(N \log N)$, which is comparable to the compression algorithms.

5.6 Known issues

All methods of calculating the amount of information have inaccuracies. One of the problems with SSM information is that if the repetition in a repeating pattern is not perfect, the value of the SSM information is larger than expected, as shown in the example below.
| $X$ | $I_{SSM}(X)$ [bit] |
|---|---|
| 123456789 123456789 123456789 | 29 |
| 223456789 123456789 123456789 | 50 |
Table 3. One element change can cause a notable difference in SSM information.

Conclusion

It has been shown that SSM information can determine the information content of patterns with an accuracy comparable to that of compression algorithms, while remaining simple. In addition, the information spectrum presented here provides a useful visual tool for studying the information structure of patterns in the frequency domain.

References

  1. Scoville, John, "Fast Autocorrelated Context Models for Data Compression", (2013).
  2. Laszlo Lovasz, Complexity of Algorithms (Boston University, 2020).
  3. Ben-Naim, Arieh, "Entropy and Information Theory: Uses and Misuses", Entropy (2019).
  4. Pieter Adriaans, "Facticity as the amount of self-descriptive information in a data set", (2012).
  5. Juha Karkkainen, "Fast BWT in small space by blockwise suffix sorting", Theoretical Computer Science (2007).
  6. A. N. Kolmogorov, "On tables of random numbers", Mathematical Reviews (1963).
  7. Laszlo Lovasz, "Information and Complexity (How To Measure Them?)", The Emergence of Complexity in Mathematics, Physics, Chemistry and Biology, Pontifical Academy of Sciences (1996).
  8. Anne Humeau-Heurtier, "The Multiscale Entropy Algorithm and Its Variants: A Review", Entropy (2015).
  9. Allen, Benjamin and Stacey, Blake and Bar-Yam, Yaneer, "Multiscale Information Theory and the Marginal Utility of Information", Entropy (2017).
  10. Goldberger, A. and Amaral, L. and Glass, L. and Hausdorff, J. and Ivanov, P. C. and Mark, R. and Stanley, H. E., "PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals.", Circulation (2000).
  11. Markus Mauer, Timo Beller, Enno Ohlebush, "A Lempel-Ziv-style Compression Method for Repetitive Texts", (2017).
  12. Grunwald, Peter and Vitanyi, Paul, "Shannon Information and Kolmogorov Complexity", CoRR (2004).
  13. Claude E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal (1948).
  14. Ervin Laszlo, Introduction to Systems Philosophy (Routledge, 1972).
  15. Olimpia Lombardi and Federico Holik and Leonardo Vanni, "What is Shannon information?", Synthese (2015).

Appendix
I. Example patterns
| Notation | A pattern or a detail of the pattern | Length | Explanation |
|---|---|---|---|
| $X_A$ | 001101101010111001110010001001000100001000010000 | 48 bit | Random binary pattern. |
| $X_B$ | 101010101010101010101010101010101010101010101010 | 48 bit | Repeating binary pattern. |
| $X_C$ | 111111110000000011111111000000001111111100000000 | 48 bit | Repeating binary pattern. |
| $X_D$ | The sky is blue. The sky is blue. The sky is blue. The sky is blue. The sky is blue. The sky is blue. | 101 characters | Repeating text. |
| $X_E$ | The sky is blue. The sky is blue. The sky is blue. The sky is blue. The sky is glue. The sky is blue. | 101 characters | Duplicate text with one character error. |
| $X_F$ | cagtttctagctatattagcgggcacgactccactgcgcctatgcggaag cttgatcaaattttgaccagatcttaggtaacctgaacaagtcagttcgt aggcgtcgattggccgacgggtgcgaagaaaaaagtgatcgttgtccaac atctctagtacccaccgttgtgatgtacgttatacggacacgagcatatt | 200 characters | Random DNA pattern. |
| $X_G$ | cggcagtgaggacaatcagacaactactattcaaacaattgttgaggttc aacctcaattagagatggaacttacaccagttgttcagactattgaagtg aatagttttagtggttatttaaaacttactgacaatgtatacattaaaaa tgcagacattgtggaagaagctaaaaaggtaaaaccaacagtggttgtta | 200 characters | DNA segment of COVID virus. |
| $X_H$ | EK8Pi5sv2npTfzoaMNp87QtT5kbIUQkTJzHwICCstSmg4aksHT MwztgHFg3j8AoIobN3FycCLidGeyROiNyG5itB9kxyez1LZjFF HIBjipE7hidZyiJmilXM0mwnxzlzWSfQ0xP1OuFpWosMwS1cjY t4nyv4ONx1FceWkAf8SdvDGZVzeVzq2EmOqRF6Im2iudcYRswj | 200 characters | Random string (0-9, a-z, A-Z). |
| $X_I$ | I think it was the beginning of Mrs. Bond's unquestioning faith in me when she saw me quickly enveloping the cat till all you could see of him was a small black and white head protruding from an immovable cocoon of cloth. | 221 characters | English text (James Herriot's Cat Stories). |
| $X_J$ | ABCDFIEDBBAAAABEHJJGEEDBDGMSPLHFBACFKMRPLGDCA[...] | 321 characters | Solar activity between 1700-2021 (A-Z). |
| $X_K$ | My name is Joe. That is what my colleague, Milton Davidson, calls me. He is a programmer and I am a computer program. [...] | 8391 characters | Isaac Asimov: True love. |
| $X_L$ | 1011000100110011101110111011001100110011[...] | 80000 bit | Binary ECG signal [4]. |
| $X_M$ | 110000101000000011000010100000001100001010000[...] | 313664 bit | Binary seismic data. |
| $X_N$ | 0101001001001001010001100100011011100100[...] | 325472 bit | Speech recording. |
| $X_O$ | 1010001010100001101000001010001010100011[...] | 524288 bit | Lena (256x256 pixel, grayscale). |
Written by:
Zsolt Pocze
Volgyerdo Nonprofit Kft., Nagybakonak, HUN, 2023
