The International Society for Complexity, Information and Design [[i]] says this:
Complexity is one of those terms for which it is difficult to give a precise definition. Intuitively, it is thought of as a property or feature that implies the opposite of simplicity. Complexity is often used to describe single systems made of multiple interacting parts. However, complexity descriptions can be used for a large variety of applications.
Trying to find a more formal definition of complexity, we could say this:
Complexity is a measure of the amount of information required to describe a dataset, without using the algorithm that generated the dataset.
This might become a bit clearer with an example. Consider a Koch snowflake segment, shown here at the first three iteration levels, t = 0, 1, and 2.
The raw data required to describe the shape clearly increases with the iterations, so we say that the complexity increases. This only holds if we are ignorant of the algorithm that generates the curve. If we did have access to the generating function, we could use it like a compression algorithm: instead of having to specify 17 points, we could simply describe the dataset as “Koch curve at t=2”.
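The growth of the raw description can be made concrete with a short sketch. Each iteration replaces every segment of the curve with four smaller ones, so a curve at level t has 4^t segments and 4^t + 1 points; the function name below is a hypothetical helper, not something from the text.

```python
def koch_point_count(t: int) -> int:
    """Points needed to specify a Koch curve segment at iteration level t.

    Each iteration replaces every segment with 4 smaller segments,
    giving 4**t segments and hence 4**t + 1 points.
    """
    return 4**t + 1

for t in range(3):
    print(t, koch_point_count(t))
# t = 2 gives 17 points, matching the count above, while the
# "compressed" description ("Koch curve at t=2") stays the same size.
```

The point count grows geometrically with t, whereas the compressed description using the generating function has constant length.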
We will treat the term “complexity” as equivalent to the term “entropy”, and hence label a dimension with a complexity gradient as a time dimension.
Traditionally, entropy as disorder implies randomness. The generating functions that we investigate are deterministic, so any apparent randomness is due only to ignorance of the generating function and the initial condition.
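A minimal sketch of this point, using the logistic map as an illustrative deterministic rule (my choice of example, not the document's): the output bit sequence looks random to an observer, yet it is fully reproducible from the rule and the initial condition.

```python
def logistic(x: float) -> float:
    # Deterministic chaotic map x -> 4x(1 - x)
    return 4.0 * x * (1.0 - x)

x = 0.2  # initial condition
bits = []
for _ in range(16):
    x = logistic(x)
    bits.append(1 if x > 0.5 else 0)

print(bits)  # looks random, but anyone knowing the map and x0 = 0.2
             # can regenerate it exactly
```

The apparent randomness vanishes once the generating function and initial condition are known, exactly as the text argues.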