Several regulars have requested that I put together a short OP and I’ve agreed to do so out of deference to them. Let me be clear from the outset that this is not my preferred course of action. I would rather discuss in a more interactive way so that I can learn from criticism and modify my thoughts as I go along. OPs are a little too final for my tastes.
I want to emphasize that everything I say here is tentative and is subject to modification or withdrawal as feedback is received.
It’s important to understand that I speak for no one but myself; it is likely that my understanding of particular terms and concepts will differ from that of others with an interest in ID. I also want to apologize for the generally poor quality of this piece: I am terrible at detail, and I did not put in the effort I should have, due mainly to laziness and lack of desire.
With that out of the way:
Background
For the purpose of this discussion I would like to expand upon the work of Phil Maguire (found here) and stipulate that cognition can be seen as lossless data compression in which information is integrated in a non-algorithmic process. The output of this process is a unified, coherent abstract concept that from here forward I will refer to as a specification/target. Maguire’s work thus far deals with unified consciousness as a whole, but I believe his insights are equally valid when dealing with integrated information associated with individual concepts.
I am sure that there are those who will object to the understanding of cognition that I’m using, for various reasons, but in the interest of brevity I’m treating it as an axiomatic starting point here. If you are unwilling to accept this proviso for the sake of argument, perhaps we can discuss it later in another place instead of bogging down this particular discussion.
From a practical perspective, cognition works something like this: in my mind I losslessly integrate information that comprises the defining boundary attributes of a particular target. For instance, “house” has such information as “has four walls”, “waterproof roof”, “home for family”, and “warm place to sleep”, as well as various other data, integrated into the simple unified target of a house that exists in my mind. The process by which I do this cannot be described algorithmically. From the outside it is a black box, but it yields a specified target output: the concept of “house”.
Once I have internalized what a house is, I can proceed to categorize objects I come across into two groups: those that are houses and those that are not. You might notice the similarity of this notion to the Platonic forms, in that the target House is not a physical structure existing somewhere but an abstraction.
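Purely as a toy illustration (a minimal Python sketch; the attribute names and threshold are hypothetical, and the formation of the target itself is exactly the step I claim cannot be captured this way), the downstream categorization step might look like:

```python
# Toy sketch only: a "target" reduced here to a set of defining attributes.
# The integration that produces the target is claimed to be non-algorithmic;
# this only illustrates the categorization that follows once a target exists.
HOUSE_TARGET = {"four walls", "waterproof roof",
                "home for family", "warm place to sleep"}

def is_house(observed, target=HOUSE_TARGET, threshold=0.75):
    """Categorize an object by how much of the target it matches."""
    overlap = len(observed & target) / len(target)
    return overlap >= threshold

print(is_house({"four walls", "waterproof roof", "home for family"}))  # True
print(is_house({"four walls"}))                                        # False
```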
Argument
With that in mind, it seems reasonable to me to posit that the process of design would simply be the inverse of cognition.
When we design something we begin with a pre-existing specific target in mind, and through various means we attempt to decompress its information into an approximation of that target. For instance, I might start with the target of a house and through various means proceed to approximate the specification I have in my mind in a physical object. I might hire a contractor, nail and cut boards, etc. The fruit of my labor is not a completed house until it matches the original target sufficiently to satisfy me. However, no matter how much effort I put into the approximation, it will never completely match the picture of an ideal house that I see in my mind. This, I believe, is because of the non-algorithmic nature of the process by which targets originate. Models can never match their specifications exactly.
Another good example of the designing process would be the act of composing a message.
When I began to write this OP I had an idea of the target concept I wanted to share with the reader, and I have proceeded to go about decompressing that information in a way that I hoped could be understood. If I am successful, after some contemplation a target will be present in your mind that is similar to the one that exists in mine. If the communication were perfect, the two targets would be identical.
The bottom line is that each designed object is the result of a process that has at its heart an input that is itself the result of the non-algorithmic process of cognition (the target). The T-shirt equation would look like this:
CSI=NCF
Complex Specified Information is the result of a noncomputable function. If the core of the design process (CSI) is non-computable, then the process in its entirety cannot be completely described algorithmically.
This insight immediately suggests a way to objectively determine whether an object is the result of design. Simply put: if an algorithmic process can fully explain an object, then it is not designed. I think this is a very intuitive conclusion; I would argue that humans are hardwired to tentatively infer design for processes that we can’t fully explain in a step-by-step manner. The better we can explain an object algorithmically, the weaker our design inference becomes. If we can completely explain it in this way, then design is ruled out.
At some point I hope to describe some ways that we can be more objective in our determinations of whether an object/event can be fully explained algorithmically, but as there is a lot of ground covered here already, I will put it off for a bit. There are also several questions that will need to be addressed before this approach can be justifiably adopted generally, such as how comprehensive an explanation must be to rule out design, or conversely when we can be confident that no algorithmic explanation is forthcoming.
If possible I would like to explore these in the future, perhaps in the comments section. It will depend on the tenor of the feedback I receive.
peace
You have a private definition of computable number, different from the one that Turing introduced along with his machines in “On Computable Numbers with an Application to the Entscheidungsproblem” (1936). You wouldn’t be talking about the rational numbers, would you?
Hey Tom,
As much as I’d like to shoot the breeze with you on this one, it’s sort of off topic, so if I could avoid talking about it right now I would.
I’ll quickly cover it again just in case you missed this discussion before. Hopefully we can lay it to rest.
I don’t care too much what it’s called. You can call it anything you like. As you can probably guess, I don’t tend to get too hung up on definitions in these sorts of discussions. I would hope that if someone has questions about what I mean by a particular word they would just ask. IMO that is a big part of how people learn to understand each other.
I’m sure you will think that this is all just another sign that I don’t understand the concepts involved. That is OK; you might be right, so feel free to ignore me if you like.
If I were writing a paper about pi, I guess I would try to be precise.
When it comes to non-computability and pi, all I’m really saying is that some things, like an infinite number of non-repeating digits, are beyond the abilities of a finite Turing machine operating for a finite period of time.
That is all.
The reason I don’t just use the term “irrational” is that there are an infinite number of irrational numbers, but only a very few are specified. It’s those that interest me at this point.
I suppose I could talk about pi being an irrational constant, and that would come close to capturing the gist of what I want to say, but then I would have to spell out what I mean by “constant” each time (i.e., that it is specified by a very short description).
It’s just easier to say it’s non-computable and specified. That captures the exact thought I want to convey at this point, which is that all the digits of pi are beyond the scope of a computer, but their information is completely integrated in the simple idea of the ratio of a circle’s circumference to its diameter.
If the discussion progresses at some point I would like to talk about a cool way of thinking about a random number as the product of an irrational process that is not specified.
So I don’t want folks to think that I believe irrational equals designed, which I think would be an easy mistake to make.
Again, feel free to ignore all this (and me) if you like.
peace
I’m curious why repeating digits are excluded. Numbers like 1/3 are implied algorithms in the same sense as the algorithmic definition of pi. There is no string of digits exactly equal to 1/3.
petrushka asks,
I’m curious why repeating digits are excluded.
I say,
check it out
http://en.wikipedia.org/wiki/0.999…
and
http://en.wikipedia.org/wiki/Repeating_decimal
for extra credit check this out
http://en.wikipedia.org/wiki/Schizophrenic_number
Think about all of this in the context of comparing a line-graph of the original with one of a nearly identical model
I think you will understand
Peace
So .9999… is equal to 1.
What finite string is equal to 1/3?
Let me elaborate. 1/3 represents one divided by three, an operation that does not terminate. Pi can also be exactly represented by an infinite series of integer divisions.
Hey petrushka,
I think I understand what you are saying.
However, for our purposes we are trying to discover whether a particular number string can be explained by an algorithmic process, not whether the algorithm terminates.
.333333… can be explained algorithmically because it is simply 1/3. The algorithm is the specification.
Pi, on the other hand, cannot be expressed as a ratio of integers. There is no algorithm that can produce pi; it can only be approximated.
4 - 4/3 + 4/5 - 4/7 + 4/9 - … is not pi. It is only an approximation. The same goes for any other possible algorithm.
check it out
http://en.wikipedia.org/wiki/Squaring_the_circle
and
http://en.wikipedia.org/wiki/Approximations_of_%CF%80
I hope that helps.
peace
But you are simply wrong. The algorithm exactly specifies pi, just as .999… exactly specifies 1. 1/3 cannot be written exactly as a decimal number for the same reason pi cannot. The algorithm does not terminate.
Perhaps I should clarify. Please state the limit of precision for the algorithms calculating pi. At what digit position do they return an error?
Your premise is that you can compare strings. There is no string exactly representing 1/3.
fifthmonarchyman,
I’ve read through the paper you reference a couple of times. I have a few questions that I hope will give me a better understanding of whatever argument it is that you are trying to make.
From your OP:
This reminds me of the famous physics joke that ends with assume a spherical cow. It seems to me that you’re assuming away most of the challenging components of your argument.
Consider, for example, this excerpt from the beginning of the Maguire, Moser, Maguire, and Griffith paper:
What of a software implementation that does not use a database but instead uses recurrent neural networks for associative pattern recognition? Since there is no way to easily remove one scent from such a network, does it meet the definition of integrated information by the criteria outlined in the paper? Can such a network therefore be said to be conscious in that sense?
Another assertion made early in the paper also needs to be better supported:
memory functions must be vastly non-lossy
It seems to me that not only is human memory lossy, it deteriorates (or at least changes) over time. Recalling memories alters them. I’m sure our site hostess could add some insight from her speciality on this issue.
petrushka asks,
Please state the limit of precision for the algorithms calculating pi. At what digit position do they return an error?
I say.
Algorithms approximating pi don’t work like that. They don’t give you the first 3 digits of pi with one iteration and the next 3 with the second iteration. What happens is that these algorithms converge toward pi.
The more we run them, the closer we get, but we never match pi exactly. That is why they are called approximations and not representations.
For example, the iterations of the algorithm I shared earlier look something like this:
about 4
about 2.66666666666666666666666666666666
about 3.46666666666666666666666666666666
about 2.8952380952380885714285714285714
about 3.3396825396825330158730154285714
etc., etc.
They get closer and closer but never quite get there.
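A minimal Python sketch (using exact fractions to avoid rounding noise; the function name is mine) reproduces those partial sums:

```python
from fractions import Fraction

def leibniz_partial_sum(terms):
    """Partial sum of the series 4 - 4/3 + 4/5 - 4/7 + ... after `terms` terms."""
    total = Fraction(0)
    for k in range(terms):
        total += Fraction(4, 2 * k + 1) * (-1) ** k
    return total

for i in range(1, 6):
    print(i, float(leibniz_partial_sum(i)))
# prints roughly: 4.0, 2.6667, 3.4667, 2.8952, 3.3397, ...
# converging toward pi (3.14159...) but never equaling it at any finite step
```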
you say,
Your premise is that you can compare strings. There is no string exactly representing 1/3.
I say,
.3333333333 exactly represents 1/3 to 10 digits. .33333333333 exactly represents 1/3 to 11 digits. This is true whenever you decide to halt the algorithm.
No matter when we decide to halt the model string it will match the original string exactly.
This is a completely different process and result than you get with algorithmic approximations of Pi.
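For contrast, a minimal sketch of the 1/3 case by plain long division (the helper name is mine):

```python
def third_digits(n):
    """First n decimal digits of 1/3 by long division; every prefix is exact."""
    digits, remainder = [], 1
    for _ in range(n):
        remainder *= 10
        digits.append(str(remainder // 3))
        remainder %= 3
    return "0." + "".join(digits)

print(third_digits(10))  # 0.3333333333 -- matches 1/3 exactly to 10 digits
print(third_digits(11))  # 0.33333333333 -- and to 11, wherever you halt
```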
hope that helps
peace
Hey Patrick,
You are correct that the most controversial part of my argument is my axiomatic assumption that cognition is a non-algorithmic process.
I’m completely convinced by the arguments in the paper but I can understand why some others would not be yet.
I do hope that I’ve shown that if the paper’s conclusion is valid then design detection by a method similar to the one I’ve presented is possible.
That is a big part of what I wanted to get done here.
If I could get some affirmation that I’ve succeeded in that, I would be happy to start another thread dealing with the paper itself.
Or perhaps someone else could post a critique of the paper as a starting point.
I think it would be an interesting discussion
peace
Hey Patrick,
check this out
http://www.evolvingai.org/fooling
I think you might find it interesting
peace
No, it isn’t.
I’m puzzled as to what that means.
I just ate some chocolate. I don’t think my eating chocolate was an algorithmic process. I also don’t think that it was a non-algorithmic process. Asking whether it was algorithmic seems to be asking an inappropriate question.
Similarly, asking whether cognition is algorithmic or non-algorithmic seems to be asking an inappropriate question.
What am I missing?
The decimal expansion of 1/3 will never equal 1/3. The expansion of pi via algorithm will never run out of correct digits. I suggest you test your understanding by submitting it to a math journal.
fifthmonarchyman,
Some do, not all. Check out the BBP formula on this page for a neat way of calculating a particular digit of pi.
fifthmonarchyman,
I don’t see an argument for the issues I raised, just assertions. I’m interested in your answers to the questions I raised. Is a recurrent neural network an instance of integrated information, according to the definitions in the paper? Is it therefore cognizant of the associative memories it contains, again by Maguire et al.’s definitions?
You also need to address the issue of whether or not human memory is lossless. That’s a significant claim that requires real evidence to support.
I don’t think you’ve made it that far, personally. However, if you have a mechanism for design detection that can objectively distinguish between a designed string of bits and a non-designed string, I’d like to see it.
0.1 (base 3)
Tom English says,
What finite string is equal to 1/3?
I say,
I think you might be missing the point.
It’s not that .33333333 is equal to 1/3, but that it’s indistinguishable from 1/3 at any place we halt the algorithm.
Patrick says,
Check out the BBP formula on this page for a neat way of calculating a particular digit of pi.
I say.
I agree it is cool, but right now we are interested in the string, not in a particular digit. I’m not aware of any algorithms that give you the string of digits in order. Are you?
peace
Family time; I’ll be back later. Thanks for the interaction, guys.
fifthmonarchyman,
Yes. Since the BBP formula gives the value of a particular digit, simply iterating over it starting from 1 will give you the string of digits in order.
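For what it’s worth, here is a minimal Python sketch of that idea. One caveat: the BBP formula extracts hexadecimal digits of pi, not decimal ones, and this float-based version is only reliable for roughly the first several million positions:

```python
def _bbp_series(j, d):
    """Fractional part of the sum over k >= 0 of 16^(d-1-k) / (8k + j)."""
    total = 0.0
    for k in range(d):  # terms with non-negative powers of 16, kept mod 1
        total = (total + pow(16, d - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = d
    while True:  # rapidly vanishing tail terms
        term = 16.0 ** (d - 1 - k) / (8 * k + j)
        if term < 1e-17:
            break
        total += term
        k += 1
    return total % 1.0

def pi_hex_digit(d):
    """The d-th hexadecimal digit of pi after the point (d = 1, 2, ...)."""
    frac = (4 * _bbp_series(1, d) - 2 * _bbp_series(4, d)
            - _bbp_series(5, d) - _bbp_series(6, d)) % 1.0
    return "0123456789ABCDEF"[int(frac * 16)]

# Iterating d = 1, 2, 3, ... yields the digits in order: 243F6A88...
print("".join(pi_hex_digit(d) for d in range(1, 9)))
```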
Then why not 1 base pi?
It would be best to reserve the term algorithm for definite procedures that halt in a finite number of steps. You’re referring to a procedure that does not halt. The distinction is important in this context.
There are various finite descriptions of \pi. Nobody’s concept of the number gives all of the digits explicitly. Consider an algorithm that, given a positive integer n, outputs the first n digits of \pi (and halts). The algorithm is a conceptualization of \pi. It provides an unlimited capacity to generate digits of \pi, despite the fact that in any particular run it generates a limited number of digits.
Now, as for the nonsense about the Kolmogorov complexity of \pi: Kolmogorov complexity is defined on finite sequences of bits. Let x be the infinite sequence of bits in the conventional base-2 numeric description of \pi. The expressions K(x) and K(\pi) are ill-formed. However, K(x_{1:n} | n), the length of the shortest binary program that outputs the first n bits of x on input of n (under binary representation), is defined for all positive integers n. Let’s say we implement the algorithm described above with a c-bit program. Then we know that K(x_{1:n} | n) \leq c for all n. This appropriately indicates that there’s finite information in \pi.
It is unenlightening to observe that there’s no upper bound on the unconditional K(x_{1:n}). A program not receiving n as an input must store a binary description of n within itself, using a number of bits on the order of \log n. This is irrelevant to measurement of information in \pi itself.
I’ll mention, for contrast, the definition of a random infinite sequence x: there’s a positive integer c such that K(x_{1:n}) \geq n - c for all n. What we had above was compression of every finite prefix to c or fewer bits. What we have here is compression of no finite prefix by more than c bits. The upshot is that the infinite sequence of bits describing \pi is, in the theory of Kolmogorov complexity, essentially the opposite of random.
To get an idea of why that doesn’t pan out, ask yourself what the digits would be in base \pi. In any case, \pi cannot be calculated in finite time on a discrete computer.
Tom English said
The upshot is that the infinite sequence of bits describing \pi is, in the theory of Kolmogorov complexity, essentially the opposite of random.
I say,
It sounds like we agree for the most part. The important parts at least.
you say,
A program not receiving n as an input must store a binary description of n within itself, using a number of bits on the order of \log n. This is irrelevant to measurement of information in \pi itself.
I say,
Not sure I agree, especially if n is infinite, but it does not affect my point or my method in any way as far as I can tell, so let’s not argue about this one, OK?
I’m not using my understanding of the Kolmogorov complexity of pi as any kind of argument; it’s just a way to think about what you said here:
quote:
In any case, \pi cannot be calculated in finite time on a discrete computer.
end quote:
peace
Hey Patrick,
I will spend a little time answering your questions, but to do the paper justice would require more time than I have right now.
you say
Is a recurrent neural network an instance of integrated information, according to the definitions in the paper?
I say,
I don’t think so but I’m not familiar enough to say for sure. I’ll do some research and get back with you
you say,
You also need to address the issue of whether or not human memory is lossless
I say,
Human memory is not lossless as far as I can tell, but memory is not the same thing as cognition or consciousness.
I have lots of memories that I am not conscious of. The article you linked to talked about the process of bringing memories like these into conscious awareness. This would be a nonsensical notion if the two concepts were synonymous.
If I understand correctly, when we talk about cognition we are talking about the process of combining diverse memories and/or other information, such as sensory input, into unified compressed wholes.
For example, my concept of who my wife is includes memories of time spent together and other information like the color of her eyes. With this information I form a mental picture of who she is.
The process of forming that picture is non-lossy, I believe. If it were not, we could change some parts of the constituent information at random and my understanding of who my wife is would not be affected.
I expect I often forget things about my wife, but the very act of forgetting means that these things were not part of the lasting mental picture I have of who she is.
I hope that makes sense
peace
I was trying to give a simple justification, and blew it:
A program not receiving n as an input must store a binary description of n within itself, using a number of bits on the order of \log n.
It’s better to think in terms of beginning with the c-bit program that outputs x_{1:n} on input of n, and modifying it to contain a particular value of n. Then the length of the modified program is an upper bound on K(x_{1:n}). The number of bits added to the program is on the order of \log n, and thus K(x_{1:n}) \leq c + O(\log n). That is, there’s a constant c' such that for all sufficiently large integers n, the length of the shortest program that writes x_{1:n} and halts does not exceed c + c' \log n. I doubt that there’s a tighter bound than O(\log n), but haven’t bothered to Google.
I wrote the original comment as a mini-tutorial, and not just for one person. Fifthmonarchyman has since brought up the matter of infinite n.
It’s important to understand that all integers are finite, even though there are infinitely many of them.
Tom English says,
It’s important to understand that all integers are finite, even though there are infinitely many of them.
I say,
You are correct, and I understand, but for my purposes I need to think of the entirety of the infinite sequence of digits. So would not K(x_{1:n}) be cumulative for each n in the series?
Perhaps this is just about the halting problem, and all I’m really trying to say is that the algorithm needs to halt, but only at a place that is always beyond my ability to distinguish between the model and pi itself.
Is there an upper bound on the complexity of an algorithm with this feature?
peace
Neil Rickert says
What am I missing?
I say,
I don’t know, but perhaps you are missing that, if I’m correct, to say that something is non-algorithmic (and specified) is the same as saying it is a conscious act.
Patrick says,
I don’t think you’ve made it that far, personally.
I say,
It would be extremely helpful to me if you would rephrase what I’m proposing and tell me where you think it falls short, again assuming that the paper’s conclusions are correct for the sake of argument.
peace
That doesn’t parse.
Even if it parses it’s just another flavor of woo.
fifthmonarchyman,
I’m interested in your view on this. Based on the Maguire et al. paper, an associative memory implemented using a recurrent neural network does seem to meet their criteria.
From the paper:
They say “memory functions must be vastly non-lossy,” which appears to be manifestly not the case in humans. They continue with “otherwise retrieving them repeatedly would cause them to gradually decay.” What we see is that retrieving memories does in fact change them. While not synonymous with “decay,” it seems still to contradict this unsupported assertion in the paper.
These two issues are from the first three or four pages of the paper. The rest of the paper depends on the non-lossy memory point in particular. On the face of it, it looks incorrect.
fifthmonarchyman,
What I believe you are trying to do is to support this claim from your OP:
You claim to be able to distinguish designed bit strings from non-designed bit strings, if I’m understanding you correctly. I’d be happy to provide a half-dozen strings of between 500 and 1000 bits to see how your mechanism works, if you’re willing to demonstrate it here.
That being said, I can’t accept the paper’s conclusions without the two issues I already raised being addressed. I’m usually up for a good ad argumentum, but this is too much like the spherical cow without gravity in a vacuum. We’ll end up discussing how many angels can dance on the head of a pin with wild assumptions like those. We need some rigorous operational definitions and support for the thus far baseless assertions in the paper.
Following up on my own comment, because it occurs to me that keiths, if I remember correctly, already pointed out that any sonnet can be produced by an algorithm. This is based on the observation that all sonnets are of finite length and any finite string can be produced algorithmically. That refutes your claim in your OP.
I’m still interested in your design detection mechanism. Let me know if the strings I proposed are of an appropriate length.
I simply can’t accept the idea that anything about human cognition or memory is lossless. Everything about human mentation is error prone. Not only that, but recent work suggests that specific memories can be removed.
All it takes is 15 minutes of Googling and reading to know that this is pretty much the response of cognitive scientists and neuroscientists. I hadn’t wanted to share what fifthmonarchyman would have found for himself, were he trying to draw sound conclusions rather than develop arguments. He says that he wants us to accept the conclusions of the paper for the sake of argument, when it is the assumptions that are the problem. The vaunted theorem — ooooh, proof! — tells us absolutely nothing about nature if the putative model of nature is bunk.
I’m a minor league computer geek, not a scientist. I have never heard the term lossy applied to degradation resulting from repeated access to memory.
petrushka,
That’s a good point. I’m also not sure what the Maguire et al. paper means by “vastly non-lossy”.
I do think that the claim or assumption that human memory is lossless requires evidentiary support. If the authors are using the word in an idiosyncratic way, an operational definition is in order.
I’m familiar with “lossy” in relation to compression.
If you take a data file, then use PKZIP to compress it, you will finish up with a smaller file (usually much smaller). This is called “lossless compression” because you can get the original file back.
If you take a digital image, as a file of pixel data, you might compress that to a “.jpg” file. This time, you have lossy compression. You cannot get all of the original pixel data back. But people looking at the picture usually cannot tell the difference. What is lost does not seem to be important to us.
Similarly your music CD was “.wav” data, which is a digital sampling of the music. Getting the “.wav” is already lossy, but we cannot tell the difference because of limitations of hearing. What is lost is above the highest pitch that we can detect. If you then convert that to a “.mp3” file, that is another round of lossy compression. From the “.mp3” file, you cannot get back the original “.wav” file. But most listeners cannot tell the difference.
My assumption is that “lossy” is being used in something like that sense.
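For concreteness, a minimal Python sketch of the lossless case, using zlib (the DEFLATE compression behind PKZIP-style tools):

```python
import zlib

original = b"some pixel or text data, repetitive enough to compress well " * 50
compressed = zlib.compress(original)

# Lossless: decompression returns the original bytes, bit for bit.
assert zlib.decompress(compressed) == original
print(len(original), "bytes ->", len(compressed), "bytes")
```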
I did not read the Maguire et al paper all the way to the end, because I doubt that human perception is at all similar to what they are describing.
I guess I could put that a different way. If the way human cognition works is at all like what AI researchers are doing, then human cognition is surely designed. I very much doubt that the AI model could evolve or would evolve. But I doubt that human cognition is anything like that.
I think the argument has an undistributed muddle.
Patrick says,
I’m interested in your view on this. Based on the Maguire et al. paper, an associative memory implemented using a recurrent neural network does seem to meet their criteria.
I say,
I have no problem with ascribing consciousness to physical processes that are non-lossy and non-algorithmic. This is not an argument for dualism.
Patrick says,
I’d be happy to provide a half-dozen strings of between 500 and 1000 bits to see how your mechanism works, if you’re willing to demonstrate it here.
I say,
There is nothing difficult about my method. I’ve already posted it here; I would think anyone could duplicate it rather quickly. It’s not rocket science.
In my spare time I’m working on a shareable app that you could use to test strings yourself without going to the trouble of putting together the comparison environment. I’m not a programmer, so it’s proving to be a tedious exercise.
Right now the game is crudely coded in a spreadsheet. If you provide a way to contact you via email, I’d be happy to share it.
In the meantime I would not mind plugging in a string or two for you. The longer the better; the method is only as good as the resolution of the measurement.
peace
Tom English says.
All it takes is 15 minutes of Googling and reading to know that this is pretty much the response of cognitive scientists and neuroscientists. I hadn’t wanted to share what fifthmonarchyman would have found for himself, were he trying to draw sound conclusions rather than develop arguments.
I say,
I’m well aware of the criticisms of the paper. Pretty much every paper attracts objections and criticisms, some well founded, some not so much. I would especially expect this to be the case when dealing with a subject like consciousness. The question will be whether those objections and criticisms can be adequately addressed.
It is very early days; the paper is not even a couple of years old. I did not especially want to hash all that out now, which is why I requested we take the conclusions as axiomatic for the sake of argument.
peace
petrushka says
I think the argument has an undistributed muddle.
I say,
Please elaborate; this is exactly the sort of information I want to know. That is why I’m here.
thanks in advance
peace
Neil Rickert says,
My assumption is that “lossy” is being used in something like that sense.
I say,
That is my understanding of lossy as well.
With non-lossy compression, on the other hand, we should be able to decompress the file without losing any information at all.
Think of pi again.
I don’t need to memorize the entire string of digits, since I have non-lossily compressed them all into the concept of pi and can reconstruct them whenever, and to whatever extent, I please.
The same goes for my concept of a house or of my wife. All the information that I deem to be relevant is tied up in the compression, and I can access it at will.
I think that is the sort of thing the paper is trying to get at.
peace
fmm – you said you’d reconsider your posting style based on other opinions. As a consumer, could I just say that I’d find blockquotes easier to follow?
Cheers
Allan
1. Your concept of lossy storage is hopelessly muddled.
2. Brains are not in any sense of the word lossless.
3. Starting with premises that are demonstrably wrong is not a good start.
4. You cannot distinguish pi from an algorithmically derived approximation by comparing finite strings.
5. None of this has any bearing on how brains work or how evolution works.
And how design works, either, I’d submit.
So, this is his start. It can only get curiouser. Please give some strings a go and report the results, fifthmonarchyman.
Allan Miller says,
As a consumer, could I just say that I’d find blockquotes easier to follow?
I say,
As you wish
peace
Thank you. I also find it easier to follow with blockquote tags.
petrushka says,
Your concept of lossy storage is hopelessly muddled.
I say,
I’m just using the standard definition from here:
http://en.wikipedia.org/wiki/Lossy_compression
and Neil Rickert appears to agree with me. Exactly what part of that understanding do you find to be hopelessly muddled?
you say,
Brains are not in any sense of the word lossless.
I say,
Sure they are. If they were not, we would not even be able to communicate, because I must assume that you are the same person now as you were when this discussion started.
you say,
Starting with premises that are demonstrably wrong is not a good start.
I say,
I agree. Could you please demonstrate where my starting premises are wrong?
thanks
you say,
You cannot distinguish pi from an algorithmically derived approximation by comparing finite strings.
I say,
Sure I can. Compare the following strings:
A) 3.14159265359
B) 3.3396825396
(A) is the first 12 digits of pi and (B) is a string from the 5th iteration of an algorithm to approximate pi. Surely you can tell the difference between the two strings.
you say,
None of this has any bearing on how brains work or how evolution works.
I say,
Really? How can you be sure?
Are you saying that evolution is not an algorithmic process?
Do you know definitively how the brain gives rise to consciousness? Have you cracked the hard problem?
peace