## Multi-scale information content measurement method based on Shannon information


Abstract: In this paper, we present a new multi-scale information content calculation method based on Shannon information (and Shannon entropy). The original method described by Claude E. Shannon and based on the logarithm of the probability of elements gives an upper limit to the information content of discrete patterns, but in many cases (for example, in the case of repeating patterns) it is inaccurate and does not approximate the true information content of the pattern well enough. The new mathematical method presented here provides a more accurate estimate of the (internal) information content of any discrete pattern based on Shannon's original function. The method is tested on different data sets and the results are compared with the results of other methods like compression algorithms.

## 1 Introduction

Traditionally, Shannon's information theory has been used to measure the information content of samples. Shannon information, as defined by Claude E. Shannon, is the degree of uncertainty or surprise associated with a given outcome in a set of possible outcomes. Shannon entropy, the expected value of Shannon information, quantifies the average information content of a discrete sample or message. It is a basic concept in information theory and is widely used in communication systems and data compression.
In some situations, such as repeated patterns, Shannon's original measurement method does not give sufficiently accurate results, because it does not take the structure of the patterns into account, only certain of their statistical characteristics. To address this problem, this paper presents a new multiscale information content calculation method based on Shannon's original principles. By refining the computational approach, our method offers a more accurate estimate of the internal information content of discrete samples, regardless of their nature.
There are several other methods for measuring the information content of patterns, such as Kolmogorov complexity, randomness, and compression complexity. The common property of these methods is that they all determine the information content of patterns with some accuracy, and they therefore provide a suitable basis of comparison for checking newer methods.
To verify the effectiveness of the new method, it is applied to various data sets and compared with compression algorithms. The results show that the proposed method, based on Shannon information, closely approximates the results measured by other methods while taking a completely different approach.

## 2 Patterns

In this study, we deal with the calculation of the internal quantitative information content of discrete patterns. From the point of view of this calculation, the nature of the measured object is irrelevant: the information content of events, signals, system states, or data sequences can all be calculated, since their models (with finite precision) can all be represented as discrete patterns. By moving along a spatial pattern we obtain a temporal pattern and vice versa, so we do not distinguish between spatial and temporal patterns. The basic notation is as follows.
Denote by $M(R)$ the set of finite sequences that can be generated from the set $R$:

$$M(R) = \{X : \mathbb{N}^+ \to R\} \qquad (1)$$

Let us call the finite sequence $X \in M(R)$ a pattern:

$$X = [x_1, \ldots, x_N] \qquad (2)$$

Denote the length of the series $X$:

$$n(X) = N \qquad (3)$$

Denote the set of possible values of the series $X$:

$$R = R_X = \{r_1, r_2, \ldots, r_K\} \qquad (4)$$

Let $f(x)$ denote the number of occurrences of $x \in R_X$ in the series $X$:

$$f(x) = \sum_{i=1}^{N} [x_i = x] \qquad (5)$$

Let the relative frequency of any element $x \in R$ of the pattern $X$ be:

$$p(x) = f(x)/N \qquad (6)$$

Denote the concatenation of the patterns $X_1 X_2 \ldots X_K$ as:

$$X_1 X_2 \ldots X_K = \big\Vert_{i=1}^{K} X_i \qquad (7)$$
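To make the notation concrete, the occurrence count $f(x)$ and relative frequency $p(x)$ of Eqs. (5) and (6) can be sketched in Python (the function names are illustrative, not from the paper):

```python
from collections import Counter

def f(X, x):
    """Number of occurrences of the value x in the pattern X (Eq. 5)."""
    return Counter(X)[x]

def p(X, x):
    """Relative frequency of the value x in the pattern X (Eq. 6)."""
    return f(X, x) / len(X)
```

For the pattern $[a, a, b]$, for example, $p(a) = 2/3$.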

## 3 Information content

The information content can be interpreted intuitively when only the interpretable information content is examined. In this study we examine the amount of the entire internal information content, without interpreting it or considering its context.
The information content of a pattern can be characterized by the improbability of the individual elements of the pattern (Shannon information), by the length of the most concise description of the pattern (Kolmogorov complexity), or by the degree of randomness of the pattern.
A fundamental difference between Shannon's and Kolmogorov's viewpoints is that Shannon considered only the probabilistic characteristics of the random source that created the pattern, ignoring the pattern itself. In contrast, Kolmogorov focused only on the pattern itself. In their definitions, Kolmogorov and Chaitin called the pattern with maximum information content (somewhat inaccurately) random.
Information, complexity and randomness have such similar properties that we can reasonably assume they approach the same thing with different methods. It is sufficient to consider that the Shannon information, Kolmogorov complexity and randomness of a pattern consisting of identical elements are all minimal, while for a true random pattern all three values are maximal, and all three assign the highest information value to data sets with maximum entropy.
The concepts of entropy and information are often confused, so it is important to note that entropy can also be understood as the average information content per element.
Approached intuitively, the amount of information is a function for which the following conditions are met:

1. The information content of a pattern with zero length or consisting of identical elements is zero.
2. The information content of the pattern consisting of repeating sections is (almost) identical to the information content of the repeating section.
3. A pattern and its reflection have the same information content.
4. The sum of the information content of patterns with disjoint value sets is smaller than the information content of the concatenated pattern.
5. The information content of true random patterns is almost maximal.
Let the information content be a function $I$ that assigns a non-negative real number to any pattern $X \in M(R)$:

$$I : M(R) \to \mathbb{R}_{\ge 0} \qquad (8)$$

In addition, the following conditions are met:

1. $I(X) = 0 \iff |R_X| < 2$
2. $I\left(\Vert_{i=1}^{K} X\right) \approx I(X)$
3. $I\left(\Vert_{i=1}^{K} X_i\right) = I\left(\Vert_{i=K}^{1} X_i\right)$
4. $\bigcap_{i=1}^{K} R_{X_i} = \varnothing \Rightarrow I\left(\Vert_{i=1}^{K} X_i\right) > \sum_{i=1}^{K} I(X_i)$
5. $I(X) \le I(X_{TR})$ for all $X \in M(R)$ with $n(X) = n(X_{TR})$, where $X_{TR} \in M(R)$ is a true random pattern.
Since any pattern can be described in non-decomposable binary form, the unit of information content should be the bit.
It can be seen that for any pattern $X \in M(R)$, if $N = n(X)$ and $K = |R|$, then the maximum information content of $X$ is:

$$I_{MAX}(X) = N \cdot \log_2(K) \qquad (9)$$

That is, $I(X) \le I_{MAX}(X)$ for any pattern $X \in M(R)$. In the case of a binary pattern, $I_{MAX}(X) = N$, the length of the pattern, which means that at most $N$ bits of information (decisions) are required to describe the pattern.
If the maximum information content is known, the relative information content can be calculated:

$$I^{(rel)}(X) = I(X) / I_{MAX}(X) \qquad (10)$$
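Eqs. (9) and (10) translate directly into a short sketch (function names are illustrative):

```python
import math

def i_max(N, K):
    """Maximum information content I_MAX = N * log2(K) of a pattern (Eq. 9)."""
    return N * math.log2(K)

def i_rel(i, N, K):
    """Relative information content I / I_MAX (Eq. 10)."""
    return i / i_max(N, K)

# A binary pattern (K = 2) of length 48 can carry at most 48 bits;
# a measured content of 24 bits corresponds to 50% relative information.
```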

## 4 Shannon information

In theory, Kolmogorov complexity would provide a better approximation of the information content of patterns, but it has been proven to be uncomputable, in contrast to Shannon information, which can be computed efficiently but approximates the actual information content less well. Shannon information calculates the information content of a pattern from the expected probability of occurrence (relative frequency) of its elements.
The Shannon information of an arbitrary pattern $X \in M(R)$:

$$I_S(X) = \sum_{i=1}^{N} \log_2\left(\frac{1}{p(x_i)}\right) \qquad (11)$$

The relative frequency (expected occurrence) of the elements is only one statistical characteristic of the pattern and does not take the order of the elements into account. That is why Shannon information often gives a very inaccurate estimate of the information content.
The value of the Shannon information is the same for all patterns of the same length whose elements have the same relative frequencies. If $X \in M(R)$, $Y \in M(Q)$ and $|R| = |Q| = K$, then:

$$I_S(X) = I_S(Y), \quad \text{if } \{p(r_1), p(r_2), \ldots, p(r_K)\} = \{p(q_1), p(q_2), \ldots, p(q_K)\} \qquad (12)$$

Shannon information ignores the structure of the patterns at different scales and the laws encoded in them, and therefore overestimates the information content of patterns consisting of repeating sections.
The problem can be illustrated with a simple example. Let's calculate the Shannon entropy of the following three patterns:

1. $X_A$: `001101101010111001110010001001000100001000010000`
2. $X_B$: `101010101010101010101010101010101010101010101010`
3. $X_C$: `111111110000000011111111000000001111111100000000`
In all three cases, the set of values is $R = \{0, 1\}$ and the relative frequencies of the elements are (almost) equal, $p(0) \approx p(1) \approx 0.5$, so the Shannon information $I_S(X) = \sum_{i=1}^{N} \log_2\left(\frac{1}{p(x_i)}\right)$ is close to the maximum of 48 bits for each, although it is obvious that the information content of the three series differs significantly. Due to its randomness, the information content of $X_A$ is indeed close to 48 bits, while the information content of the other two series is much smaller, as they contain repeated sections. In $X_B$, for example, the 2-bit section $[10]$ repeats, so its information content is closer to 2 bits.
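The calculation above can be checked with a short sketch of Eq. (11); the function name and variable names are illustrative:

```python
import math
from collections import Counter

def shannon_information(X):
    """I_S(X) = sum over all elements of log2(1/p(x_i)) (Eq. 11)."""
    N = len(X)
    freq = Counter(X)
    return sum(math.log2(N / freq[x]) for x in X)

X_B = "10" * 24  # 101010...10, 48 bits
print(shannon_information(X_B))  # 48.0, although the true content is ~2 bits
```

The random pattern $X_A$ yields about 46 bits (its relative frequencies are not exactly 0.5), while the two repeating patterns both yield exactly 48 bits.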
The problem is that in the example above we examine the data sets at the elementary level, and the Shannon entropy function does not take into account the larger-scale structure of the data set, such as the presence of repeating sections longer than one signal. It is therefore natural to develop methods that are based on the Shannon entropy but analyze the data series across the entire range of resolutions, over the entire frequency range, and thus provide a more accurate approximation of the information content of the data series. Numerous such solutions have already been published. This article presents an additional method.

## 5 SSM information

### 5.1 Shannon information spectrum

Let the pattern $X$ be partitioned into sections of length $r$, where $m = \lfloor N/r \rfloor$:

$$X^{(r)} = [x_1 \ldots x_r,\; x_{r+1} \ldots x_{2r},\; \ldots,\; x_{(m-1) \cdot r + 1} \ldots x_{m \cdot r}] \qquad (13)$$

Let the following series be denoted as the Shannon information spectrum (SP) of the pattern $X$:

$$I_{SP}^{(r)}(X) = I_S(X^{(r)}), \quad r = 1, \ldots, \lfloor N/2 \rfloor \qquad (14)$$

From the sequences $X^{(r)}$ we omit truncated partitions, i.e. those shorter than $r$. For $r > \lfloor N/2 \rfloor$ there is only a single partition, so $I_{SP}^{(r)}(X) = 0$, and these values are also omitted from the spectrum.
Figure 1. Diagram A shows the Shannon information spectrum of the random pattern $X_A$, and diagram B that of the repeating pattern $X_C$ (Appendix I). It can be seen that in case B, a lower value appears at certain frequencies.
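A minimal sketch of the partitioning (Eq. 13) and the spectrum (Eq. 14); all names are illustrative:

```python
import math
from collections import Counter

def shannon_information(X):
    """I_S of a sequence of hashable elements (Eq. 11)."""
    N = len(X)
    freq = Counter(X)
    return sum(math.log2(N / freq[x]) for x in X)

def partition(X, r):
    """X^(r): sections of length r; a truncated final section is omitted (Eq. 13)."""
    m = len(X) // r
    return [tuple(X[i * r:(i + 1) * r]) for i in range(m)]

def information_spectrum(X):
    """I_SP^(r)(X) for r = 1 .. floor(N/2) (Eq. 14)."""
    return [shannon_information(partition(X, r)) for r in range(1, len(X) // 2 + 1)]

spectrum = information_spectrum("10" * 24)
# At r = 1 the elements look balanced (48 bits); at r = 2 every section is "10",
# so the spectrum drops to 0 bits at that resolution.
```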

### 5.2 Maximal Shannon information spectrum

The Shannon information spectrum is maximal in the case of random data sets. Let the following formula be denoted as the maximum Shannon information spectrum (SMS):

$$I_{SMS}^{(r)}(X) = m \cdot \log_2(\min(K^r, m)), \quad r = 1, \ldots, \lfloor N/2 \rfloor \qquad (15)$$

$I_{SMS}^{(r)}(X)$ is a supremum for all information spectra with the same value set and pattern length. If $K^r < m$, then in the case of random patterns the value set of the partitioning most likely contains all possible partitions, so the information content is approximately $m \cdot \log_2(K^r)$. If $K^r > m$, then the partitioning cannot contain all possible partitions; each partition will most likely be unique, so the information content will be $m \cdot \log_2(m)$. If $r$ is small enough, the series $X^{(r)}$ most likely contains all possible partitions, so for random data sets the measured amount of information approximately equals the maximum possible information content of the pattern, i.e. for small $r$, $I_{SMS}^{(r)}(X) \approx I_{MAX}(X) = N \cdot \log_2(K)$.
Figure 2. Comparison of the maximum Shannon information spectrum (ISMS) and the Shannon information spectrum (ISP) of the repeating pattern $X_C$.
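Eq. (15) in the same sketch style (names illustrative):

```python
import math

def max_information_spectrum(N, K):
    """I_SMS^(r) = m * log2(min(K^r, m)) for r = 1 .. floor(N/2) (Eq. 15)."""
    out = []
    for r in range(1, N // 2 + 1):
        m = N // r              # number of complete sections of length r
        out.append(m * math.log2(min(K ** r, m)))
    return out

# For a binary pattern of length 48: at r = 1 and r = 2 the supremum is 48 bits
# (the maximum information content of the whole pattern); at r = 24 only two
# sections remain, so the supremum falls to 2 * log2(2) = 2 bits.
```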

### 5.3 Shannon normalized information spectrum

If we are interested in how large the information content appears relative to the maximum value at each resolution, we can normalize the spectrum with the maximum spectrum to the range $[0, N \cdot \log_2(K)]$. Let the following sequence be denoted as the Shannon normalized information spectrum (SNS):

$$I_{SNS}^{(r)}(X) = \frac{I_{SP}^{(r)}(X)}{I_{SMS}^{(r)}(X)} \cdot I_{MAX}(X), \quad r = 1, \ldots, \lfloor N/2 \rfloor \qquad (16)$$

If the value set of the partitioning has only one element, i.e. $|R_{X^{(r)}}| = 1$, the normalized value would be $0$. In this case the information content should instead be that of the repeating partition: the average Shannon information per element at the elementary resolution multiplied by the length of the partition, $r \cdot \frac{I_{SP}^{(1)}(X)}{N}$.
Figure 3. Comparison of the Shannon normalized information spectra (SNS) of patterns from very different sources. The vertical axis represents the amount of information in bits measured at the given resolution. The spectra of the different patterns differ considerably, but in most cases there is a resolution where the information content shows a definite minimum (marked with an arrow). A: random binary pattern, B: binary pattern with repeating sections, C: DNA section, D: English text, E: ECG signal, F: audio recording containing speech, G: evolution of the number of sunspots between 1700-2021, H: seismogram, I: Lena photo.
The figures show that different types of patterns have very different and characteristic spectra. This suggests that the type or source of the pattern may be inferred from the nature of the spectrum, but we do not deal with this in this study.

### 5.4 SSM information

We know that the Shannon information gives an upper estimate in all cases, so we get the most accurate approximation of the information content from the normalized spectrum by taking its minimum. Let the information content calculated from the normalized spectrum be denoted as the Shannon spectrum minimum information (SSM information):

$$I_{SSM}(X) = \min_{r = 1, \ldots, \lfloor N/2 \rfloor} I_{SNS}^{(r)}(X) \qquad (17)$$
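Putting the pieces together, the whole SSM calculation can be sketched as follows. The normalization step assumes the ratio of the spectrum to the maximum spectrum scaled to $I_{MAX}$, together with the repeating-partition rule of Section 5.3, as reconstructed from the text; all names are illustrative:

```python
import math
from collections import Counter

def shannon_information(X):
    """I_S(X), Eq. (11)."""
    N = len(X)
    freq = Counter(X)
    return sum(math.log2(N / freq[x]) for x in X)

def ssm_information(X):
    """Shannon spectrum minimum (SSM) information, Eqs. (13)-(17)."""
    N = len(X)
    K = len(set(X))                      # size of the value set R_X
    i_max = N * math.log2(K)             # Eq. (9)
    i_s1 = shannon_information(X)        # spectrum value at elementary resolution
    spectrum = []
    for r in range(1, N // 2 + 1):
        m = N // r
        parts = [tuple(X[i * r:(i + 1) * r]) for i in range(m)]
        if len(set(parts)) == 1:
            # Repeating partition: information of one repeated section,
            # estimated from the average per-element information.
            spectrum.append(r * i_s1 / N)
        else:
            i_sp = shannon_information(parts)              # Eq. (14)
            i_sms = m * math.log2(min(K ** r, m))          # Eq. (15)
            spectrum.append(i_sp / i_sms * i_max)          # normalization
    return min(spectrum)                                   # Eq. (17)
```

With this sketch, the repeating pattern $X_B$ yields about 2 bits (the repeated section [10]) and $X_C$ about 13 bits, matching Table 1.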
Shannon information, SSM information and compression complexity of different patterns (Appendix I) in bits:

| Pattern | Source | $I_{MAX}(X)$ | $I_S(X)$ | $I_{SSM}(X)$ | $I_{ZIP}(X)$ | $I_{7Z}(X)$ | $I_{ZPAQ}(X)$ |
|---|---|---|---|---|---|---|---|
| $X_A$ | Random binary pattern. | 48 | 46 | 40 | | | |
| $X_B$ | Repeating binary pattern. | 48 | 48 | 2 | | | |
| $X_C$ | Repeating binary pattern. | 48 | 48 | 13 | | | |
| $X_D$ | Repeating text. | 362 | 343 | 58 | | | |
| $X_E$ | Duplicate text with one character error. | 374 | 347 | 116 | | | |
| $X_F$ | Random DNA pattern. | 471 | 422 | 409 | | | |
| $X_G$ | DNA segment of COVID virus. | 471 | 405 | 388 | | | |
| $X_H$ | Random string (0-9, a-z, A-Z). | 1209 | 1174 | 1174 | | | |
| $X_I$ | English text (James Herriot's Cat Stories). | 1104 | 971 | 971 | | | |
| $X_J$ | Solar activity between 1700-2021 (A-Z). | 1495 | 1349 | 1295 | | | |
| $X_K$ | Isaac Asimov: True love. | 50901 | 37266 | 32649 | 30904 | 29968 | 25248 |
| $X_L$ | Binary ECG signal. | 80000 | 79491 | 47646 | 52320 | 41032 | 36968 |
| $X_M$ | Binary seismic data. | 313664 | 312320 | 171546 | 83920 | 66064 | 45824 |
| $X_N$ | Speech recording. | 325472 | 325342 | 277489 | 286760 | 257856 | 251408 |
| $X_O$ | Lena. | 524288 | 524216 | 422085 | 443096 | 371360 | 337408 |

Table 1. Comparison of SSM information and compression complexity of different patterns.
Relative Shannon information, SSM information, and compression complexity of different patterns (Appendix I) compared to the maximum information:

| Pattern | Source | $I_S^{(rel)}(X)$ % | $I_{SSM}^{(rel)}(X)$ % | $I_{ZIP}^{(rel)}(X)$ % | $I_{7Z}^{(rel)}(X)$ % | $I_{ZPAQ}^{(rel)}(X)$ % |
|---|---|---|---|---|---|---|
| $X_K$ | Isaac Asimov: True love. | 73 | 64 | 61 | 59 | 50 |
| $X_L$ | Binary ECG signal. | 99 | 60 | 65 | 51 | 46 |
| $X_M$ | Binary seismic data. | 100 | 55 | 27 | 21 | 15 |
| $X_N$ | Speech recording. | 100 | 85 | 88 | 79 | 77 |
| $X_O$ | Lena. | 100 | 81 | 85 | 71 | 64 |

Table 2. Comparison of relative SSM information and relative compression complexity of different patterns.
It can be seen from the tables that SSM information gives results similar to those of the compression algorithms. In general, the more computationally demanding a compression or information measurement procedure is, the closer its result is to the Kolmogorov complexity. In the examined examples, the results of SSM information usually fall between those of ZIP and 7Z, which suggests that the effective power of SSM information is comparable to that of ZIP and 7Z.
Figure 4. Comparison of the results of different information measurement methods.
Figure 5. Comparison of the average results of different information measurement methods.

### 5.5 Comparison with computational complexity

If we do not know the value set of the signal sequence, the first step is to determine the number of distinct signals occurring in it, which has an asymptotic complexity of $O(N \cdot \log N)$.
Determining the Shannon information consists of two steps. First we determine the frequency of the signals, with complexity $O(N)$; then we sum the information of each signal, so the total complexity of the Shannon information is $O(N \cdot \log N) + O(N) = O(N \cdot \log N)$.
For the ZIP, 7Z and ZPAQ algorithms used to calculate the compression complexity, the complexity is usually between $O(N)$ and $O(N \cdot \log N)$, although for ZPAQ it may be greater.
In the case of SSM information, the first step is likewise to determine the frequency of the signals, with complexity $O(N)$. In the second step, the Shannon information spectrum is calculated with complexity $O(N) + O(N/2) + O(N/3) + \ldots + O(2) = O(N \cdot \log N)$; finally, the minimum of the spectrum is determined with complexity $O(N)$. The worst-case complexity of calculating the SSM information is therefore $O(N \cdot \log N) + O(N) + O(N \cdot \log N) + O(N) = O(N \cdot \log N)$, the same as that of the compression algorithms.

### 5.6 Known issues

All methods of calculating the amount of information have inaccuracies. One of the problems with SSM information is that if the repetition in a repeating pattern is not perfect, the value of the SSM information is larger than expected, as shown in the example below.
| $X$ | $I_{SSM}(X)$ [bit] |
|---|---|
| 123456789 123456789 123456789 | 29 |
| 223456789 123456789 123456789 | 50 |
Table 3. One element change can cause a notable difference in SSM information.

## 6 Conclusion

The results show that SSM information can determine the information content of patterns with an accuracy comparable to that of compression algorithms, while remaining simple to compute. In addition, the information spectrum presented here provides a useful visual tool for studying the information structure of patterns in the frequency domain.

## References

1. Scoville, John, "Fast Autocorrelated Context Models for Data Compression", (2013).
2. Laszlo Lovasz, Complexity of Algorithms (Boston University, 2020).
3. Ben-Naim, Arieh, "Entropy and Information Theory: Uses and Misuses", Entropy (2019).
4. Pieter Adriaans, "Facticity as the amount of self-descriptive information in a data set", (2012).
5. Juha Karkkainen, "Fast BWT in small space by blockwise suffix sorting", Theoretical Computer Science (2007).
6. A. N. Kolmogorov, "On tables of random numbers", Mathematical Reviews (1963).
7. Laszlo Lovasz, "Information and Complexity (How To Measure Them?)", The Emergence of Complexity in Mathematics, Physics, Chemistry and Biology, Pontifical Academy of Sciences (1996).
8. Anne Humeau-Heurtier, "The Multiscale Entropy Algorithm and Its Variants: A Review", Entropy (2015).
9. Allen, Benjamin and Stacey, Blake and Bar-Yam, Yaneer, "Multiscale Information Theory and the Marginal Utility of Information", Entropy (2017).
10. Goldberger, A. and Amaral, L. and Glass, L. and Hausdorff, J. and Ivanov, P. C. and Mark, R. and Stanley, H. E., "PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals.", Circulation (2000).
11. Markus Mauer, Timo Beller, Enno Ohlebush, "A Lempel-Ziv-style Compression Method for Repetitive Texts", (2017).
12. Grunwald, Peter and Vitanyi, Paul, "Shannon Information and Kolmogorov Complexity", CoRR (2004).
13. Claude E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal (1948).
14. Ervin Laszlo, Introduction to Systems Philosophy (Routledge, 1972).
15. Olimpia Lombardi and Federico Holik and Leonardo Vanni, "What is Shannon information?", Synthese (2015).

## Appendix

### I. Example patterns
| Notation | A pattern or a detail of the pattern | Length | Explanation |
|---|---|---|---|
| $X_A$ | 001101101010111001110010001001000100001000010000 | 48 bit | Random binary pattern. |
| $X_B$ | 101010101010101010101010101010101010101010101010 | 48 bit | Repeating binary pattern. |
| $X_C$ | 111111110000000011111111000000001111111100000000 | 48 bit | Repeating binary pattern. |
| $X_D$ | The sky is blue. The sky is blue. The sky is blue. The sky is blue. The sky is blue. The sky is blue. | 101 characters | Repeating text. |
| $X_E$ | The sky is blue. The sky is blue. The sky is blue. The sky is glue. The sky is blue. The sky is blue. | 101 characters | Duplicate text with one character error. |
| $X_F$ | cagtttctagctatattagcgggcacgactccactgcgcctatgcggaag cttgatcaaattttgaccagatcttaggtaacctgaacaagtcagttcgt aggcgtcgattggccgacgggtgcgaagaaaaaagtgatcgttgtccaac atctctagtacccaccgttgtgatgtacgttatacggacacgagcatatt | 200 characters | Random DNA pattern. |
| $X_G$ | cggcagtgaggacaatcagacaactactattcaaacaattgttgaggttc aacctcaattagagatggaacttacaccagttgttcagactattgaagtg aatagttttagtggttatttaaaacttactgacaatgtatacattaaaaa tgcagacattgtggaagaagctaaaaaggtaaaaccaacagtggttgtta | 200 characters | DNA segment of COVID virus. |
| $X_H$ | EK8Pi5sv2npTfzoaMNp87QtT5kbIUQkTJzHwICCstSmg4aksHT MwztgHFg3j8AoIobN3FycCLidGeyROiNyG5itB9kxyez1LZjFF HIBjipE7hidZyiJmilXM0mwnxzlzWSfQ0xP1OuFpWosMwS1cjY t4nyv4ONx1FceWkAf8SdvDGZVzeVzq2EmOqRF6Im2iudcYRswj | 200 characters | Random string (0-9, a-z, A-Z). |
| $X_I$ | I think it was the beginning of Mrs. Bond's unquestioning faith in me when she saw me quickly enveloping the cat till all you could see of him was a small black and white head protruding from an immovable cocoon of cloth. | 221 characters | English text (James Herriot's Cat Stories). |
| $X_J$ | ABCDFIEDBBAAAABEHJJGEEDBDGMSPLHFBACFKMRPLGDCA[...] | 321 characters | Solar activity between 1700-2021 (A-Z). |
| $X_K$ | My name is Joe. That is what my colleague, Milton Davidson, calls me. He is a programmer and I am a computer program. [...] | 8391 characters | Isaac Asimov: True love. |
| $X_L$ | 1011000100110011101110111011001100110011[...] | 80000 bit | Binary ECG signal. |
| $X_M$ | 110000101000000011000010100000001100001010000[...] | 313664 bit | Binary seismic data. |
| $X_N$ | 0101001001001001010001100100011011100100[...] | 325472 bit | Speech recording. |
| $X_O$ | 1010001010100001101000001010001010100011[...] | 524288 bit | Lena (256x256 pixel, grayscale). |
Written by:
Zsolt Pocze
Volgyerdo Nonprofit Kft., Nagybakonak, HUN, 2023
