**Low-algorithmic-complexity entropy-deceiving graphs**

**Hector Zenil, Narsis A. Kiani, and Jesper Tegnér**

*4 May 2018*

Usually, the complexity of a graph is determined by applying a Shannon-based measure to one of the graph's properties, for example measuring the entropy of the graph's degree sequence. Random networks (usually Erdős–Rényi networks) are used as null models to compare with the networks of interest. This paper shows that many current Shannon-based measures of graph complexity aren't actually measuring the graph's complexity... they are measuring the complexity of some *chosen* property of the graph. Here's a great snippet that explains how this works with strings:

> Take, for example, the bit string 01010101010101. The Shannon entropy of the string at the level of single bits is maximal, as there are the same number of 1s and 0s, but the string is clearly regular when two-bit (nonoverlapping) blocks are taken as basic units, in which case the string has minimal complexity because it contains only one symbol (01) among four possibilities (00, 01, 10, 11).
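The block-size dependence in that snippet is easy to reproduce. Here's a minimal sketch (my own illustration, not code from the paper) that computes the Shannon entropy of the same string with single-bit and two-bit nonoverlapping blocks as the basic units:

```python
from collections import Counter
from math import log2

def block_entropy(bits: str, block: int) -> float:
    # Split the string into nonoverlapping blocks of the given size,
    # then compute the Shannon entropy of the block distribution.
    chunks = [bits[i:i + block] for i in range(0, len(bits) - block + 1, block)]
    total = len(chunks)
    counts = Counter(chunks)
    # The trailing "+ 0.0" normalizes -0.0 to 0.0 for the single-block case.
    return -sum(c / total * log2(c / total) for c in counts.values()) + 0.0

s = "01010101010101"
print(block_entropy(s, 1))  # 1.0 -- equal counts of 0 and 1, maximal at bit level
print(block_entropy(s, 2))  # 0.0 -- only the block "01" occurs, minimal complexity
```

The same string looks maximally complex or trivially regular depending on the chosen granularity, which is exactly the paper's point about measuring a *chosen* property rather than the object itself.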

**Network Drama!**

*10 March 2018*

Maybe "fun" is the wrong word here, but here's an interesting exposé of how personalities affect the research atmosphere.

Start with this interesting paper,

then read this article in response,

and watch the drama ensue.

**Self-similarity of complex networks**

**Chaoming Song, Shlomo Havlin, & Hernán A. Makse**

*28 February 2018*

**Summary**

Modeled complex networks seem to be scale-free because they exhibit measurable power-law degree distributions. However, real-world networks also exhibit small-world properties, which suggests they might not be self-similar. But if you take real complex networks and renormalize the structure by grouping nearby nodes and their edges into boxes, you recover scale-free properties at every scale. This renormalization is best achieved with the box-counting method because it accounts for hubs without double-counting them.
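To make the box-counting idea concrete, here's a small sketch of a greedy box covering on a toy graph (my own simplified approximation, not the authors' algorithm): each box collects all still-uncovered nodes within shortest-path distance less than ℓ_B of a seed, and the number of boxes N_B(ℓ_B) is what scales as a power law for self-similar networks.

```python
from collections import deque

def distances_from(adj, src):
    # BFS shortest-path distances from src in an unweighted graph.
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def greedy_box_count(adj, lb):
    # Cover the graph with boxes of diameter < lb: each box holds the
    # uncovered nodes within distance lb - 1 of a seed node. Greedy and
    # deterministic (smallest uncovered node seeds each box).
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        seed = min(uncovered)
        dist = distances_from(adj, seed)
        box = {v for v in uncovered if dist.get(v, lb) < lb}
        uncovered -= box
        boxes += 1
    return boxes

# Toy example: a path graph 0-1-2-3-4-5 as an adjacency dict.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(greedy_box_count(adj, 1))  # 6 -- every node is its own box
print(greedy_box_count(adj, 2))  # 3 -- pairs of adjacent nodes
print(greedy_box_count(adj, 3))  # 2 -- runs of three nodes
```

Repeating this for a range of box sizes and fitting N_B(ℓ_B) ~ ℓ_B^(-d_B) gives the fractal dimension d_B; the renormalized network is built by linking boxes whose members share an edge.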

**Thoughts**

This somehow seems intuitive. I need to think more about how the hub-counting affects the number of boxes and the box weights. I do like how the renormalization is decided by the number of ‘jumps’ it takes to get from one node to another. This has some causal implications, even if it wasn't the main focus of this paper. It would seem that grouping nodes is a ‘time-wise’ coarse-graining, meaning that the time it takes to travel from node to node is boxed. But this breaks down at the concept of going from box to box. This might solve a lot of conceptual problems about what coarse-graining does and why networks have the properties that we currently see. This makes me think that the scale-free power-law properties are a result of some baked-in coarse-graining of the system that is being observed. I wonder how true this is... Once again, I'm taken back to trying to understand what coarse-graining actually means in terms of causality and our ability to perceive it as living systems.