Working Definitions for the Design Detection Game/Tool

I want to thank OMagain in advance for doing the heavy lifting required to make my little tool/game shareable. His efforts will not only speed the process up immeasurably; they will lend some much needed bipartisanship to this endeavor as we move forward. When he is done I believe we can begin to attempt to use the game/tool to do some real testable science in the area of ID. I’m sure all will agree this will be quite an accomplishment.
Moving forward I would ask that in these discussions we take things slowly, doing our best to leave out the usual culture-warfare template and to focus on what is actually being said rather than on the motives and implications we think we see behind the words.

 

I believe now would be a good time for us to do some preliminary definitional housework. That way when OMagain finishes his work on the gizmo I can lay out some proposed Hypotheses and the real fun can hopefully start immediately.

 

It is always desirable to begin with good operational definitions that are agreeable to everyone and as precise as possible. With that in mind I would like to suggest the following short operational definitions for some terms that will invariably come up in the discussions that follow.

 

1. Random – exhibiting no discernible pattern; alternatively, a numeric string corresponding to the decimal expansion of an irrational number that is unknown to the observer evaluating it

2. Computable function – a function with a finite procedure (an algorithm) telling how to compute it

3. Artifact – a nonrandom object that is described by a representative string that can’t be explained by a computable function that does not reference the representative string

4. Explanation – a model produced by an alternative method that an observer can’t distinguish from the string being evaluated

5. Designer – a being capable of producing artifacts

6. Observer – a being that, with feedback, can generally and reliably distinguish between artifacts and models that approximate them
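
The second clause of definition 1 can be made concrete. A minimal sketch in Python (the helper name `irrational_digits` and the choice of √2 are mine, purely illustrative): the opening decimal digits of an irrational number, computed exactly with integer arithmetic, give a candidate string for the game.

```python
from math import isqrt

def irrational_digits(n: int) -> str:
    """First n decimal digits of sqrt(2), an irrational number.

    isqrt(2 * 10**(2*k)) is the integer part of sqrt(2) * 10**k,
    so its decimal representation holds the first k+1 digits."""
    return str(isqrt(2 * 10 ** (2 * (n - 1))))

print(irrational_digits(20))  # 14142135623730950488
```

To an observer who knows the generator the string is perfectly predictable; to one who does not, it should exhibit no discernible pattern. The definition turns on exactly that asymmetry.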

Please take some time to review and let me know if these working definitions are acceptable and clear enough for you all. These are works in progress and I fully expect them to change as you give feedback.

Any suggestions for improvement will be welcomed, and as always please forgive the spelling and grammar mistakes.

peace

541 thoughts on “Working Definitions for the Design Detection Game/Tool”

  1. stcordova: There is a high failure rate in some realms of detecting non-random patterns.

    I think you will be surprised at the success rate of the game. I know I was.
    We shall see.

    stcordova: The traditional ID argument is framed in absolutist terms, which I think makes little sense since we are dealing with practically undecidable propositions (unless God gives us a fireworks show and speaks from the heavens).

    I have repeatedly said that in the end this will boil down to the problem of other minds.
    There is no way to prove that other minds exist at all; it’s simply a position of faith.

    A critic can always deny that a designer is personal. There is nothing that can ever compel them to acknowledge God.

    peace

  2. petrushka: As Patrick points out, one problem with The Paper is it’s simply wrong about how brains work.

    I would disagree.
    That is a big part of the hypothesis that the game intends to test. You remember science.

    peace

  3. Patrick: So memory functions are definitely not “vastly non-lossy” as assumed in the paper. That has a large impact on their conclusions.

    You are confusing “memory functions” with cognition; these are two different things.

    “Learning” the pattern of a string in the context of the game is a “vastly non-lossy” process; at least that is my hypothesis. We shall see.

    peace

  4. Patrick: What if instead of a database lookup the artificial smell detector trained a neural network to distinguish between the smells? The weights of the connections between each node would seem to meet the criteria for integrated information, by the definition used in the paper. Do you agree?

    How could an artificial smell detector train a neural network to distinguish between the smells unless it already knew how to distinguish between the smells?

    Patrick: Interestingly, training a neural network to calculate XOR is one of the first tasks assigned to students of machine learning. This makes it even more clear that the state of a trained neural network meets the definition of integrated information.

    I’m really not too familiar with machine learning.
    A good way to demonstrate that a trained neural network meets the definition is for it to reliably and generally distinguish between real and random data in the game.

    I think you were working on a little two-week hack to do just that. Let me know when you finish it 😉

    peace
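
For readers unfamiliar with the XOR example Patrick mentions, here is a minimal sketch (assuming numpy; the weights are hand-set rather than trained, which is enough to show where a trained network’s knowledge would reside):

```python
import numpy as np

def step(x):
    return (x > 0).astype(float)

# A 2-2-1 network computing XOR. In a trained network these weight
# values would be found by gradient descent; here they are hand-set.
# Hidden unit 0 fires for OR(a, b); hidden unit 1 fires for AND(a, b);
# the output fires for OR-but-not-AND, i.e. XOR.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -2.0])
b2 = -0.5

def xor_net(a: int, b: int) -> int:
    h = step(W1 @ np.array([a, b], dtype=float) + b1)
    return int(step(W2 @ h + b2))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))   # last column: 0, 1, 1, 0
```

The point of contention in the thread is whether such a state counts as integrated information: the function is nowhere stored as a lookup table, only distributed across the weights.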

  5. fifthmonarchyman,

    So memory functions are definitely not “vastly non-lossy” as assumed in the paper. That has a large impact on their conclusions.

    You are confusing “memory functions” with cognition; these are two different things.

    “Learning” the pattern of a string in the context of the game is a “vastly non-lossy” process; at least that is my hypothesis. We shall see.

    I’m looking at the paper and it’s talking about memory. Memories are definitely not “vastly non-lossy”. Therefore the conclusion of the paper, that brain processes cannot be modeled computationally, is unsupported.

    Do you agree that the weights in a neural network meet the definition of “integrated information” as used in the paper?

  6. fifthmonarchyman,

    What if instead of a database lookup the artificial smell detector trained a neural network to distinguish between the smells? The weights of the connections between each node would seem to meet the criteria for integrated information, by the definition used in the paper. Do you agree?

    How could an artificial smell detector train a neural network to distinguish between the smells unless it already knew how to distinguish between the smells?

    Supervised learning. Let’s say you have a machine that can detect the same number of smells as a human. Humans have about 400 different types of scent receptors. That means there will be about 400 inputs to the network. If you want to distinguish between a million different smells, you’ll need about 20 binary outputs. In between you’ll have somewhere around 300 to 500 nodes in two hidden layers.

    To train the network you present it with examples of each smell you want to distinguish. The part that’s interesting in this context is that the total number of nodes and connections will be far less than the number of outputs supported. The system isn’t doing a one-to-one mapping as described in the paper. If you change any of the weights, in all likelihood you’ll affect the ability to categorize multiple smells. Again, this is very similar to how brains are described in the paper.

    Based on this, it seems that trained neural networks contain integrated information. Do you agree?

    A good way to demonstrate that a trained neural network meets the definition is for it to reliably and generally distinguish between real and random data in the game.

    I’m not asking about your game, I’m asking about the definitions used in the paper. Given what I just explained about the structure of neural networks, do you agree that they meet the criteria for integrated information? If not, why not?
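
For concreteness, the 400-input architecture Patrick describes can be mocked up. Everything below is hypothetical (random, untrained weights; shrunken hidden layers), a numpy sketch rather than an actual smell detector, but it lets the one-weight question be posed directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy version of the described architecture: 400 receptor inputs,
# two hidden layers, 20 binary outputs (2**20 > one million categories).
sizes = [400, 64, 64, 20]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]

def forward(x, ws):
    for W in ws[:-1]:
        x = np.tanh(W @ x)          # hidden layers
    return (ws[-1] @ x) > 0         # binarized outputs

x = rng.normal(size=400)            # one "smell" hitting the receptors
before = forward(x, weights)

weights[0][0, 0] += 5.0             # perturb a single hidden-layer weight
after = forward(x, weights)
print(int((before != after).sum()), "of 20 output bits changed")
```

Because a single hidden-layer weight feeds every downstream layer, anywhere from none to many of the 20 output bits can flip; there is no discrete variable holding one smell that can simply be edited.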

  7. Patrick: I’m looking at the paper and it’s talking about memory.

    Memory is in fact a rather fuzzy concept.

    I would argue that “the memory function” you are talking about is not the sort of thing the paper is referring to.

    I believe the paper is talking about a situation like this:

    Suppose I was an evil scientist and, using a mind ray, I changed just one aspect of your memory of your wife. Let’s say that instead of a 120-pound woman I made you believe she was instead a 220-pound man.

    Patrick: Do you agree that the weights in a neural network meet the definition of “integrated information” as used in the paper?

    I don’t think so.
    If I changed one weight slightly, would all the information encoded in the network be destroyed?

    peace

  8. fifthmonarchyman,

    I would argue that “the memory function” you are talking about is not the sort of thing the paper is referring to.

    I believe the paper is talking about a situation like this:

    Suppose I was an evil scientist and, using a mind ray, I changed just one aspect of your memory of your wife. Let’s say that instead of a 120-pound woman I made you believe she was instead a 220-pound man.

    Would you say that your memory of her was mostly intact?

    I don’t think this is analogous to the paper’s claims. Here are two paragraphs where they explain the difference between integrated and non-integrated information:

    According to Tononi’s (2008) theory, when somebody smells chocolate the effect that it has on their brain is integrated across many aspects of their memory. Let’s consider, for example, a human observer named Amy who has just experienced the smell of chocolate. A neurosurgeon would find it very difficult to operate on Amy’s brain and eliminate this recent memory without affecting anything else. According to the integrated information theory, the changes caused by her olfactory experience are not localised to any one part of her brain, but are instead widely dispersed and inextricably intertwined with all the rest of her memories, making them difficult to reverse. This unique integration of a stimulus with existing memories is what gives experiences their subjective (i.e. observer specific) flavour. This is integrated information.

    In contrast, deleting the same experience in the case of an artificial smell detector would be easy. Somewhere inside the system is a database with discrete variables used to maintain the detection history. These variables can simply be edited to erase a particular memory. The information generated by the artificial smell detector is not integrated. It does not influence the subsequent information that is generated. It lies isolated, detached and dormant.

    So, since a neural network can not “simply be edited to erase a particular memory” it appears to meet the definition of integrated information, correct?

    Do you agree that the weights in a neural network meet the definition of “integrated information” as used in the paper?

    I don’t think so.
    If I changed one weight slightly, would all the information encoded in the network be destroyed?

    No. Does that change your answer? Is there some reason you don’t want the definition of integrated information to include software constructs?

  9. Fifth hasn’t understood the tone discrimination example. There is no database and no way to selectively erase a memory or delete a function. The discriminator circuit is integrated.

    On the other hand, human memories can be lost, and there is evidence that memories can be removed. Even though they are integrated. They can fade with time or they can lose associations over time. They can also be falsely implanted or falsely recollected.

  10. Patrick: No. Does that change your answer?

    No, it just confirms that a neural network does not meet the definition of integrated information.

    Patrick: So, since a neural network can not “simply be edited to erase a particular memory” it appears to meet the definition of integrated information, correct?

    I’m confused. First you say that changing one weight will not destroy memory; now you seem to imply it will. Apparently you need to clarify exactly what happens when you change the weights in the network.

    Patrick: Is there some reason you don’t want the definition of integrated information to include software constructs?

    No particular reason except that,

    If software can distinguish between real and fake strings the same way I can, with a “vastly non-lossy” process, it would qualify as an observer according to my definition, and design detection would be an algorithmic process. At that point I would argue design itself would qualify as algorithmic.

    My hypothesis is that it can’t and that the game is an effective Turing test.

    We will have to wait for your hack to see if this particular idea is falsified 😉

    I love science

    peace

  11. petrushka: On the other hand, human memories can be lost, and there is evidence that memories can be removed. Even though they are integrated. They can fade with time or they can lose associations over time. They can also be falsely implanted or falsely recollected.

    Correct, but totally irrelevant to the point of the paper or the game.

    peace

  12. Apparently what you all are missing is the fact that the following string is an integrated whole

    123456789

    Instead of 9 individual symbols, the string is in fact one unified symbol expressing the pattern we see.

    You can’t change any single digit in the string without destroying the integration and thus the pattern itself.

    peace

  13. fifthmonarchyman,

    No. Does that change your answer?

    No, it just confirms that a neural network does not meet the definition of integrated information.

    How so?

    So, since a neural network can not “simply be edited to erase a particular memory” it appears to meet the definition of integrated information, correct?

    I’m confused. First you say that changing one weight will not destroy memory; now you seem to imply it will. Apparently you need to clarify exactly what happens when you change the weights in the network.

    What you asked was “If I changed one weight slightly would *all* the information encoded in the network be destroyed?” Note the emphasis I added. The answer to that is no, just as destroying one neuron in a human brain will not destroy all memories.

    Destroying one weight or one neuron could well cause the loss of a small part of the memory. Since neural networks and brains are similar in this regard, it suggests that a neural net holds integrated information.

    Is there some reason you don’t want the definition of integrated information to include software constructs?

    No particular reason except that,

    If software can distinguish between real and fake strings the same way I can, with a “vastly non-lossy” process, it would qualify as an observer according to my definition, and design detection would be an algorithmic process. At that point I would argue design itself would qualify as algorithmic.

    My hypothesis is that it can’t and that the game is an effective Turing test.

    I still don’t see any reason to conclude that a trained neural network doesn’t contain integrated information by the definition used in the paper. Do you?

  14. fifthmonarchyman,

    petrushka: On the other hand, human memories can be lost, and there is evidence that memories can be removed. Even though they are integrated. They can fade with time or they can lose associations over time. They can also be falsely implanted or falsely recollected.

    Correct, but totally irrelevant to the point of the paper or the game.

    The excerpts I’ve provided from the paper say otherwise. Human memory being “vastly non-lossy” is essential to the argument they are making.

  15. Patrick: Destroying one weight or one neuron could well cause the loss of a small part of the memory.

    Perhaps this is part of the confusion: “memory” can be an overarching thing or a particular thing.

    By “the memory” we mean a particular memory: the smell of chocolate or the pattern of a particular numeric string. That is what the paper is talking about.

    Would changing the weight in a neural network destroy a particular memory like these?

    peace

  16. So changing 1234556789 to something else destroys the pattern we see?

    Kidding aside, integrated patterns are likely to be artifacts of our perceptual machinery.

  17. fifthmonarchyman,

    By “the memory” we mean a particular memory: the smell of chocolate or the pattern of a particular numeric string. That is what the paper is talking about.

    Would changing the weight in a neural network destroy a particular memory like these?

    Just as with destroying a single neuron in a human brain, it might eliminate the ability to classify a smell into one or a few categories. It might just weaken the signal. It might have no visible effect.

    Given that the results are similar to those for human memory, do you have any reason to say that a trained neural network does not contain integrated information?

  18. Patrick: Human memory being “vastly non-lossy” is essential to the argument they are making.

    You are confounding “overarching human memory” with particular memories like the smell of chocolate or the pattern in a particular string. These two ideas are not the same thing.

    It’s particular human memories that are vastly non-lossy.

    You can’t change any part of them without altering them beyond repair. That is the point of the paper. It’s really rather simple.

    The pattern found in 123456789 is not the same one in 123546789.
    The smell of chocolate is not the same as the smell of heavy cream.

    peace

  19. Patrick: Given that the results are similar to those for human memory, do you have any reason to say that a trained neural network does not contain integrated information?

    You are assuming that “human memory” is nothing more than the sum of neurons and their effects. I don’t think you have any grounds to make this assumption.

    Patrick: it might eliminate the ability to classify a smell into one or a few categories. It might just weaken the signal. It might have no visible effect.

    So you really don’t know what will happen when you change the weights of a neural network; until you demonstrate that what’s going on is in fact non-lossy information integration, the question remains open.

    Sounds like your little hack will be pretty important in definitively answering this question.

    I love science

    peace

  20. petrushka: Provide some evidence that human memory is something other than neurons and supporting cells.

    If a human can easily do something that computers cannot in principle do, i.e. distinguish between m(z) and m(z′) when z and z′ are close, it would tend to support the notion that human memory is not computable.

    That is why Patrick’s little hack is such a big deal here.

    I wonder how that one is coming along 😉

    peace

  21. petrushka:
    Provide some evidence that human memory is something other than neurons and supporting cells.

    My reading is that memory is more of a process, and the neurons and supporting cells are the substrate for that process. I suppose if we could extract memories from this substrate after death, we could get some idea how dynamic memory is.

  22. Memory is an action or behavior rather than a state, but the brain at any given time has a structure that does the behaving. The mind is what the brain does, but the brain does what the brain is.

  23. fifthmonarchyman: If a human can do something that computers cannot in principle do, i.e. distinguish between m(z) and m(z′) when z and z′ are close, it would tend to support the notion that human memory is not computable.

    The question is, how can you be certain of the principle? Personally, I would find it difficult if not impossible to describe a condition where two phenomena are different but the difference would be detectable to only one observer under all possible circumstances.

    And conversely, there are plenty of extremely similar phenomena people can distinguish only by using the appropriate instrumentation.

    Given the number of phenomena that were unsuspected until some new instruments blundered upon them, I doubt you’d find many scientists who believe we have found them all.

  24. petrushka:
    Memory is an action or behavior rather than a state, but the brain at any given time has a structure that does the behaving. The mind is what the brain does, but the brain does what the brain is.

    I wonder, if it were possible to take a snapshot of all brain structure and activity at some instant in time, whether we could capture an actual memory.

  25. Flint: The question is, how can you be certain of the principle?

    Did you read the paper? It provides a mathematical proof.

    Flint: I would find it difficult if not impossible to describe a condition where two phenomena are different but the difference would be detectable to only one observer under all possible circumstances.

    The beauty of the game is that it reduces diverse phenomena to a single context and lets us compare potential observers and potential strings apples to apples.

    Flint: And conversely, there are plenty of extremely similar phenomena people can distinguish only by using the appropriate instrumentation.

    The output of any measurement using any instrumentation can be represented by a numerical string.

    Flint: Given the number of phenomena that were unsuspected until some new instruments blundered upon them, I doubt you’d find many scientists who believe we have found them all.

    This enterprise is not about ruling out all potential measurement schemes. It is about evaluating the output of whatever scheme we happen to have.

    peace

  26. petrushka: What is it that humans can distinguish that computers cannot, in principle?

    If I’m right, we can distinguish between m(z) and m(z′) when z and z′ are close by looking at the integrated patterns in the representative strings.

    I thought we already covered that.

    Peace

  27. I do not believe it is possible to make a snapshot of a brain. If you follow the “toy” example of the tone discriminator, the behavior is integrated into the physical structure of the device–despite the fact that the circuits are digital. The same configuration might not work on a copy of the device. Certainly not on a device that copied only the logic circuits but was physically different.

    It seems almost certain that brain behavior involves timing and not just logic. Timing might be affected by variations in protein coding alleles.

  28. petrushka: Fifth, have you fead anything on this thread?

    I have and I know that
    Fifth, have you fead anything on this thread?
    can be distinguished from
    Fifth, have you read anything on this thread?

    The first statement is about whistling, by the way, and has a completely different meaning than the second
    😉

  29. fifthmonarchyman: If I’m right we can distinguish between m(z) and m(z′) when z and z′ are close by looking at the integrated patterns in the representative strings.

    I thought we already covered that.

    Peace

    Why can’t computers do the same thing in principle? I’d argue that, in principle, computers can pass the Turing test.

  30. Flint: Why can’t computers do the same thing in principle? I’d argue that, in principle, computers can pass the Turing test.

    Did you read the paper?

    Non-lossy integrated information is not computable. There is a mathematical proof of this.

    The only question is whether what I’m doing when I learn the pattern of a string is in fact non-lossy integration.

    peace

  31. petrushka: And what follows from noticing a typo.

    It follows that changing just one character in a string can radically alter its meaning.

    Therefore sentences contain non-lossy integrated information.

    peace

  32. petrushka: That might just be the stupidest thing I’ve read this week.

    How so?
    Remember the definition of an integrating function:

    i.e. the knowledge of m(z) does not help to describe m(z′) when z and z′ are close.

    You have just seen the concept conclusively illustrated.

    Perhaps you would like me to whistle it for you
    😉
    peace

  33. Fifth, sentences can have multiple meanings even without differences. The same character string could be grammatically correct in multiple languages. It could have different connotations to different people. It could be a coded message with a secret meaning to some.

    The meaning is not something inherent in the string.

    Nor would the lack of apparent meaning be inherent to the string.

    Not woulg typos ususaaly prevent the meaning fron being clear ib modt cased.

  34. It appears that this will be another completely worthless thread where you simply define yourself to be correct.

  35. I think you will be surprised at the success rate of the game. I know I was.
    we shall see

    I was surprised that it recognized a Shakespeare sonnet as a random string.

  36. Norm Olsen:
    Why does this remind me of “Ontogenetic Depth”. Much promised, never delivered.

    I haven’t seen anything promised yet. Not anything coherent. I suppose strings that are different are different. But I haven’t seen any response to the tone discriminator circuit. It is not a string, nor can it be fully described by a string. And yet it is the result of an evolutionary process.

  37. petrushka: Fifth, sentences can have multiple meanings even without differences. The same character string could be grammatically correct in multiple languages. It could have different connotations to different people. It could be a coded message with a secret meaning to some.

    This is why the observer is so important to the game. The meaning of the string is what is read by the observer.

    What is important is whether or not the observer can distinguish a difference by learning the overall pattern when the strings are close.

    petrushka: It appears that this will be another completely worthless thread where you simply define yourself to be correct.

    Definitions are important. If you have a problem with what I’ve offered please let me know.

    petrushka: The OP states that operational definitions will be forthcoming. When can we expect them?

    As soon as you give me specific feedback to improve what I’ve offered. I expect to post updated definitions before OMagain finishes his work.

    petrushka: I was surprised that it recognized a shakespeare sonnet as a random string.

    Yeah, live and learn.
    Binary numbers are difficult to distinguish with an unmodified game.

    peace

  38. petrushka: But I haven’t seen any response to the tone discriminator circuit. It is not a string nor can it be fully described by a string.

    If this is true then the tone discriminator circuit seems to be irrelevant to my endeavor. What am I missing?

    peace

  39. Norm Olsen: Why does this remind me of “Ontogenetic Depth”. Much promised, never delivered.

    The cool thing about this project is that we are waiting for someone on your side of the debate.

    So you don’t have to depend on the evil fundies to get it done for you

    peace

  40. fifthmonarchyman: It follows that changing just one character in a string can radically alter its meaning.

    Therefore sentences contain non-lossy integrated information.

    peace

    I’ve come to the tentative conclusion that I have no idea what is expected of anyone. To me, the two sentences had the same meaning, despite the typo. To you, they meant something different. So meaning is something someone attributes to something, and not something it has.

    So if the meaning is lost, is the “information” lost? Depends on the operational definitions of these words. Failing that, we can only fail to communicate.

  41. Flint: So meaning is something someone attributes to something, and not something it has.

    To some extent I agree,

    Another important feature of the game is that we can compare the conclusions of multiple observers with different, contradictory biases, who each have no idea of the actual assay being performed.

    All that the various observers will know is that they are trying to determine whether the two strings are distinguishable.

    This way we can see if pattern recognition itself is completely subjective or if there are objective elements.

    I am very interested in what we will find.

    peace
