Working Definitions for the Design Detection Game/Tool

I want to thank OMagain in advance for doing the heavy lifting required to make my little tool/game sharable. His efforts will not only speed the process up immeasurably, they will also lend some much-needed bipartisanship to this endeavor as we move forward. When he is done I believe we can begin to attempt to use the game/tool to do some real testable science in the area of ID. I’m sure all will agree this will be quite an accomplishment.
Moving forward, I would ask that in these discussions we take things slowly, doing our best to leave out the usual culture-warfare template and to focus on what is actually being said rather than on the motives and implications we think we see behind the words.


I believe now would be a good time for us to do some preliminary definitional housework. That way when OMagain finishes his work on the gizmo I can lay out some proposed Hypotheses and the real fun can hopefully start immediately.


It is always desirable to begin with good operational definitions that are agreeable to everyone and as precise as possible. With that in mind I would like to suggest the following short operational definitions for some terms that will invariably come up in the discussions that follow.


1. Random – exhibiting no discernible pattern; alternatively, a numeric string corresponding to the decimal expansion of an irrational number that is unknown to the observer who is evaluating it

2. Computable function – a function for which there is a finite procedure (an algorithm) telling how to compute it

3. Artifact – a nonrandom object that is described by a representative string that can’t be explained by a computable function that does not reference the representative string

4. Explanation – a model produced by an alternative method that an observer can’t distinguish from the string being evaluated

5. Designer – a being capable of producing artifacts

6. Observer – a being that, with feedback, can generally and reliably distinguish between artifacts and models that approximate them
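To make definition 1 a little more concrete, here is a minimal sketch (purely illustrative, not part of the tool itself) of one common heuristic for “exhibiting no discernible pattern”: incompressibility. The strings, seed, and comparison below are my own arbitrary choices for illustration.

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length in bytes of the zlib-compressed encoding of s."""
    return len(zlib.compress(s.encode("ascii")))

# A strongly patterned string: the digits 1-9 repeated 50 times.
patterned = "123456789" * 50

# A string of equiprobable digits (fixed seed so the example is repeatable).
rng = random.Random(0)
patternless = "".join(rng.choice("0123456789") for _ in range(len(patterned)))

# The patterned string compresses far better than the patternless one,
# which is one crude, observer-independent proxy for "has a pattern".
assert compressed_size(patterned) < compressed_size(patternless)
```

This is only a heuristic: compression catches repetition, but an observer may see patterns (like a social security number) that no general-purpose compressor can.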

Please take some time to review and let me know if these working definitions are acceptable and clear enough for you all. These are works in progress and I fully expect them to change as you give feedback.

Any suggestions for improvement will be welcomed and as always please forgive the spelling and grammar mistakes.

peace

541 thoughts on “Working Definitions for the Design Detection Game/Tool”

  1. fifthmonarchyman: If a computer program can reliably distinguish between artifacts and models it would be an observer according to my definition, but it might not be a “person”.

    OK, but that seems highly contrived. Are you going to demonstrate that a computer program actually can distinguish between artifacts and models before we start the game? Otherwise, as I insinuated in my previous comment, I fear that you may simply be jigging the rules in favour of your desired outcome.

    In fact, wouldn’t it be better for all of us to examine this tool/game/gizmo and then decide if we agree on your definitions? Because you’re basically asking us to accept your premises in ignorance of your argument.

    In the words of every Star Wars character penned by Lucas … “I have a bad feeling about this.”

  2. The problem that Dembski addressed was distinguishing artifacts. Without clear definitions and a pile of examples I call bullshit. This is precisely what is being contested.

  3. Neil Rickert: The only pattern that I see, is that there are 9 symbols with no repeats. If you are seeing a pattern other than that, then it is probably a pattern that depends on other ways that those symbols are used in the culture.

    Excellent insight.

    This is what I call the Y axis. I expect we will talk about it when the time comes. “No repeats” would be lower on the axis than the first 9 integers of the Arabic numeral system.

    Where you are on the axis depends on how much informational context you share with the designer.

    peace

  4. Norm Olsen: Are you going to demonstrate that a computer program actually can distinguish between artifacts and models before we start the game?

    As best I can tell, he will be looking at strings of symbols.

    As I see it, all strings of symbols are artifacts. Symbols themselves are artifacts.

  5. Norm Olsen: Are you going to demonstrate that a computer program actually can distinguish between artifacts and models before we start the game?

    Actually it’s a hypothesis of mine that a computer program can’t do this. Patrick promised to do a little two-week hack to falsify this idea, but it must be taking a little longer than he thought 😉

    Norm Olsen: Otherwise, as I insinuated in my previous comment, I fear that you may simply be jigging the rules in favour of your desired outcome.

    I’m sorry you feel this way. This sort of suspicion is what you get when the principals in a discussion have such mutually contradictory worldviews. I hope we can get past this by taking our time and defining our terms.

    Norm Olsen: In fact, wouldn’t it be better for all of us to examine this tool/game/gizmo and then decide if we agree on your definitions? Because you’re basically asking us to accept your premises in ignorance of your argument.

    Perhaps. As I told Patrick, I was hoping to get a jump on the process so that when OMagain finished we could get right to the hypothesis testing.

    peace

  6. petrushka: The problem that Dembski address was distinguishing artifacts. Without clear definitions and a pile of examples I call bullshit. This is precisely what is being contested.

    I hope to provide clear definitions and examples. This post is the first step in the process

    peace

  7. Neil Rickert: As I see it, all strings of symbols are artifacts. Symbols themselves are artifacts.

    Another excellent insight.
    I would say that symbols and strings are artifacts and at the same time can be used to represent artifacts.

    That is a cool strange and beautiful thing about language.

    peace

  8. petrushka: The paper doesn’t appear to address evolved intelligence.

    I’ve scanned the paper you linked and it seems that this sort of “evolved intelligence” is exactly the sort of thing that is ruled out as an integrating function by the other paper.

    Do you understand why this is?

    peace

  9. You are going to tell us in your own words why evolved intelligence cannot be integrative.

  10. petrushka: You are going to tell us in your own words why evolved intelligence cannot be integrative.

    Because the output of an integrating function is irreducibly complex. That means that all of the elements of the output are necessary in order for it to be valid.

    Returning to the idea of strings: if we are looking for the string 123456789, only 123456789 will do as an output.

    Close only counts in horseshoes and hand-grenades.

    If we are looking for 123456789 the string 124345789 is just as wrong as the string 774757890.

    We need the model string to be indistinguishable from the original string in order to have any validity at all.

    Does that make sense?

    peace .

  11. “Because the output of an integrating function is irreducibly complex” is just a jumble of words having no actual meaning.

    It’s just a fancy way of saying Because I say so.

  12. I haven’t followed the earlier discussions on this topic, so I’m not clear at all what this is all about. That may explain why I don’t get your definitions.

    Random – exhibiting no discernible pattern; alternatively, a numeric string corresponding to the decimal expansion of an irrational number that is unknown to the observer who is evaluating it

    These two alternatives are not at all similar. As others noted, ‘exhibiting no discernible pattern’ is quite subjective, and such a definition may easily fail in practice. What to do when two people disagree on whether there is a discernible pattern or not?

    The problem with a decimal expansion of an irrational number is that it is infinite. Obviously you can’t work an example with an infinite string, so you would have to resort to a subset of an expansion. There will be an infinite number of subsets that to many people will display discernible patterns. For instance, the string ‘12345678’ occurs somewhere in the first 200 million digits of Pi. So you now need more constraints on which subset to pick.

    Why not define ‘random’ operationally? Say, a string of X positions where each position contains an equiprobably selected digit from the set {0,1,2…9}?
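fG’s proposed operational definition is straightforward to make concrete. A minimal sketch (the function name is mine, chosen for illustration):

```python
import random

def random_digit_string(x: int, rng: random.Random) -> str:
    """A string of x positions, each an equiprobably selected digit from {0,1,...,9}."""
    return "".join(rng.choice("0123456789") for _ in range(x))

rng = random.Random(1)  # fixed seed so the example is repeatable
s = random_digit_string(9, rng)
assert len(s) == 9 and s.isdigit()

# Under this chance model every particular 9-digit string -- "123456789"
# included -- is drawn with the same probability, 10 ** -9.
p_any_specific_string = 10.0 ** -9
assert p_any_specific_string == 1e-9
```

Note that this defines the *process*, not a property of any particular output string, which is exactly the tension discussed in the later comments.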

    fG

  13. Artifact– a nonrandom object that is described by a representative string that can’t be explained by a computable function that does not reference the representative string

    I am trying really hard to understand what you mean by this. Can you give an example or two?

    fG

  14. Petrushka:

    The problem that Dembski address was distinguishing artifacts.
    Without clear definitions and a pile of examples I call bullshit. This is precisely what is being contested.

    Agree in large measure, even though I’m an ID proponent.

    What is helpful is to state unprovable assumptions, noting which of them are widely accepted in math and science, and then which go beyond those that are widely accepted.

    At least within engineering there is the notion of Random Process which comes from statistics. One of my alma maters had this course:
    http://ece.gmu.edu/~bmark/Courses/ece728/728Sp13/728Sp13.htm

    So the notion of Random Process is at least reasonably well defined in highly respected disciplines (probability and statistics, and electrical and computer engineering).

    Random in that sense does not necessarily mean non-deterministic. A pseudo-random number generator is deterministic, but its outputs are still uncertain with respect to the relevant observer of the events. As Lewontin pointed out, whether something is random because it is capricious or merely because we don’t have all the data borders on the philosophical. Mathematically, however, we can model randomness in a way where the answer to Lewontin’s question may not matter much.
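The point about pseudo-random number generators can be seen directly in a few lines (Python’s `random.Random` standing in for any PRNG):

```python
import random

# Two PRNGs started from the same seed are fully deterministic:
a = random.Random(42)
b = random.Random(42)
seq_a = [a.randint(0, 9) for _ in range(20)]
seq_b = [b.randint(0, 9) for _ in range(20)]
assert seq_a == seq_b  # same seed, same "random" sequence

# An observer who does not know the seed, however, has no practical way
# to predict seq_a -- the randomness is relative to the observer's knowledge,
# not a property of the mechanism itself.
```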

    Perhaps a philosophical debate could be conducted whether the notion of random and the way we model it is “true”. In that sense random may be a formally undefined term and hence the enterprise and validity of modeling random processes mathematically is rooted in some unprovable assumptions. But without these assumptions, pretty much entire scientific, mathematical and engineering disciplines would be non-existent.

    Perhaps closely tied to the notion of random is the notion of uncertainty. One is again faced with defining uncertainty, and unfortunately this means defining certainty, and, well… this gets into arguing in circles quickly. The solution is to simply accept the notions as undefined terms within a framework of working hypotheses or axioms and go from there.

    Additionally there is the qualitative notion of Expected Results and the notion of Expected Value or Expectation Values. Unfortunately, the notion of expected value is a little too narrow for stating notions regarding Design. For example, say a signal is either +1 or -1 and the distribution is equiprobable; the expected value is 0, a value the signal never takes. Say the signal has a 50% chance of being 1 billion volts and a 50% chance of being 0 volts; the expected value is 500 million volts, again a value that never actually occurs. The expected value is thus too coarse a description of what the expected results are. It is better, then, if we can simply state the estimated distribution in tabular format if at all possible.
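The coarseness of expected value is easy to check numerically. A sketch using equiprobable two-point distributions to illustrate why a single expected-value number can be a poor summary:

```python
def expected_value(dist):
    """Expected value of a distribution given as {outcome: probability}."""
    return sum(outcome * p for outcome, p in dist.items())

# Equiprobable +1 / -1 signal: expected value 0, yet 0 is never observed.
assert expected_value({+1: 0.5, -1: 0.5}) == 0.0

# Equiprobable 1 billion volts / 0 volts: expected value 500 million volts,
# again a value the signal never actually takes.  The full distribution,
# not the single expected-value number, carries the relevant information.
assert expected_value({1_000_000_000: 0.5, 0: 0.5}) == 500_000_000.0
```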

    But suffice it to say, the notion of random process is reasonably well defined in the literature, and there is no need to re-invent the wheel. Qualitatively, we can at least in SOME case-by-case scenarios state what the expected results are (expected value is a bit too narrow and coarse a definition for ID purposes).

    What I have strenuously objected to is the notion that we can have some general algorithm to “reject the chance hypothesis”. Only in certain contexts can we reject the chance hypothesis, and perhaps a better framing of the issue is to say “this is a situation where we can estimate that the outcome is not the result of a random process, or of a process that tends to maximize uncertainty over time”.

    As Patrick/Mathgrrl demonstrated, creating some sort of mathematical framework like CSI V2 that not even ID proponents can use seems pretty misplaced as far as utility goes. It is better, imho, to just work within accepted definitions and actual systems (like the genetic code) than to go into almost intractable abstractions that, for most practical purposes, few people if any can work with.

    At most we can say that from widely accepted definitions of science we might formulate an estimate of what the expected results might be.

    Example:

    Given a pre-biotic soup of RNA, DNA, and proteins on a planet the size of Earth, a replication cycle involving possible combinations of these where all the classes of chemicals are represented is expected to be at least as remote as X, if not more remote; let us call that probability “the probability of life”. It doesn’t have to be our form of life, just something comparable in complexity to a primitive life form, the most basic self-replicator under the most favorable and physically plausible conditions. The probability is practically 0 for life emerging on Earth in Earth’s supposed lifetime.

    The probability calculation is an estimate. It can be contested. The disagreement might be the degree of improbability.

    Up until we get to that point, we are still dealing with science.

    Now if we turn to man-made designs, there are situations where we might still plausibly make an inference as to the probability. The example I suggest is 500 fair coins, 100% heads. In like manner we can also make statements about chemical and molecular machines that are man-made. Like, what is the probability we’ll have a 100% R-amino-acid polymer of 10,000 residues arising and being maintained by random means? Astronomically remote.

    How about finding such a polymer naturally? Presumably astronomically remote, since thermal agitation would typically have destroyed such a configuration, or dare I say, NATURALLY. Hence we would rightly suspect the polymer is man-made, since we see men are at least capable in principle of creating such systems (artifacts), and it is an astronomically atypical configuration.

    We then can say under the right circumstances we can declare something as an atypical or exceptional system versus what we would expect. That is a scientific statement.

    Now when we see something exceptional or atypical of what we would expect, we would not immediately infer design. Examples: earthquakes or supernovae. We have some expectation that, given enough time and chance, such events will occur naturally.

    500 fair coins, 100% heads after being subjected to a random process (like vigorous shaking or flipping), we would not expect to happen in the life of the known universe. If we found such a configuration on a table, we would be inclined to believe a human, or some mechanical proxy of human intelligence (a robot), organized the configuration. In like manner we could make such statements of chemical systems at the nano-molecular level, at least in principle for certain systems.
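The 500-coin figure is easy to verify directly; under independent fair flips the probability is 0.5^500, roughly 3 × 10^-151:

```python
from math import log10

# Probability that 500 independent fair coins all land heads.
p_all_heads = 0.5 ** 500

# Comfortably representable as a Python float (well above the ~1e-308
# underflow threshold), and astronomically small.
assert 1e-151 < p_all_heads < 1e-150
print(f"P(500 heads) is about 10^{log10(p_all_heads):.1f}")
# prints: P(500 heads) is about 10^-150.5
```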

    It’s a philosophical question if we were to happen on such an atypical chemical system with no known designer available whether we ought to believe in a designer. I don’t think it is formally possible to assert there is a Designer of life, we can however make testable and falsifiable estimates, or at least reasonably theoretical estimates of how exceptional life (or something comparably complex) is.

    Whether there is a Designer of the first life, unless God shows up and gives a fireworks display, is a matter of faith, imho.

    “If it looks designed, it is designed” is an unprovable proposition, but one I accept as far as the appearance of designs in life.

    So I will not say “we formally detect design”. We can only say we have educated guesses that life is an exceptional phenomenon. Some will say the known universe is big enough to make such an exceptional event inevitable. I personally don’t think so, I certainly wouldn’t wager my soul on such a proposition.

    PS

    I avoid using the term “information” as part of the ID inference. I avoid it like the bubonic plague. Yikes!

    Explanation – a model produced by an alternative method that an observer can’t distinguish from the string being evaluated

    I understand what you mean, I think, but why call this an ‘Explanation’? This seems a really strange use of the word, which generally is taken to mean something like ‘a clarification of the causes, context and consequences of a set of facts’. Why not find a better term with less risk of confusion? Perhaps a ‘Fake’?

    fG

  16. fifthmonarchyman,

    Can you not provide an example now to show what you mean? It would help in understanding your definitions.

    I could but it will not make sense to you till you actually see the artifact and nonartifact through the “lens” of the game.

    I struggled with posting definitions before the game was completed but I decided to give it a go now mostly because I was anxious to get started. Perhaps that was a mistake.

    It will be sad if you need to know the whole argument before you can agree on simple definitions

    Documenting all of your definitions and your whole argument at the same time in one post or comment would certainly be the most forthright approach and would allow other people to evaluate it most easily. That’s how scientific papers are written. I can’t think of any reason why you wouldn’t publish that way.

    Suppose I had two brand-new silver dollars. To the unaided observer they are, for all intents and purposes, indistinguishable, but they are not identical.

    Does that help?

    Not really, because you’re talking about strings of characters. With a sufficiently precise balance it’s possible to distinguish between two silver dollars. Two identical strings cannot be distinguished from each other. I’m still not sure what you’re trying to say here. Having the full argument articulated would definitely help.

  17. fifthmonarchyman,

    Norm Olsen: Otherwise, as I insinuated in my previous comment, I fear that you may simply be jigging the rules in favour of your desired outcome.

    I’m sorry you feel this way. This sort of suspicion is what you get when the principals in a discussion have such mutually contradictory worldviews.

    It has nothing to do with worldviews and everything to do with experience. Unfortunately for you, those of us who have been resisting the creationists attempting to push their dogma into public schools have seen many theists try to define their gods into existence. As at least one other person has pointed out in this thread, your definitions are not operational. I, for one, wouldn’t be able to distinguish between an artifact and a non-artifact based on them.

    Please just lay out all your definitions and your full argument in a single post. That alone would distinguish you from many of your brethren.

  18. Designer– a being capable of producing artifacts

    This appears imprecise. If you replace the loaded word ‘being’ with the more neutral ‘entity’, most manufacturing machines would fall under this definition, which I suspect is not what you want?

    fG

  19. Observer– a being that with feedback can generally and reliably distinguish between artifacts and models that approximate them

    Here too, your chosen term does not really fit well with ordinary usage. In ordinary language someone is an observer when they, you know, observe things. Any things. And what has feedback to do with it?

    What you are talking about is more like a device that can select between alternatives based on some set of criteria. A ‘Filter’, perhaps?

  20. petrushka: “Because the output of an integrating function is irreducibly complex” is just a jumble of words having no actual meaning.

    It’s just a fancy way of saying Because I say so.

    Did you read the paper? It will be helpful.

    The way the authors phrased it was

    quote:

    the knowledge of m(z) does not help to describe m(z′), when z and z′ are close.

    end quote:

    They go into a lot of detail as to what exactly they are talking about and give a mathematical proof.

    They are not discussing strings in particular but cognition in general. I am just trying to extend and test their findings in a particular specific area of numeric strings.

    peace

  21. faded_Glory: What to do when two people disagree on there being a discernable pattern or not?

    If two people disagree, the string is random to one and not to the other. If I were to see my social security number presented here, it would have meaning to me, and I hope it would appear to be random digits to everyone else.

    faded_Glory: Why not define ‘random’ operationally? Say, a string of X positions where each position contains an equiprobably selected digit from the set {0,1,2…9}?

    Because, as has been pointed out, the string 123456789 has the same probability as the string 294755890, but the second string is “random” while the first one is not.
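The two notions in play here can be kept apart in code: probability under the uniform chance model, which is identical for the two strings, and the presence of a simple describable pattern, which is not. The ascending-run test below is just one arbitrary example of a short description, not the game’s actual criterion:

```python
def uniform_probability(s: str) -> float:
    """Probability of drawing exactly s when each digit is equiprobable."""
    return 10.0 ** -len(s)

def is_ascending_run(s: str) -> bool:
    """True if each digit is exactly one more than its predecessor --
    one example of a short description that 'explains' the string."""
    return all(int(b) - int(a) == 1 for a, b in zip(s, s[1:]))

# Both strings are equally probable under the uniform chance model...
assert uniform_probability("123456789") == uniform_probability("294755890")

# ...but only the first is captured by this particular short description.
assert is_ascending_run("123456789")
assert not is_ascending_run("294755890")
```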

    Do you think you can modify your definition to take this into account?

    Thanks for the interaction

    peace

  22. faded_Glory: I understand what you mean, I think, but why call this an ‘Explanation’?……….Why not find a better term with less risk of confusion? Perhaps a ‘Fake’?

    I like that. It seems to convey what I want to convey

    I want to think about this for a bit but I think I will take your recommendation

    again thanks for the interaction

    peace

  23. The knowledge of x does not help to describe y when x and y are close is exactly the situation in genomes. If x genome is viable, this says nothing about y genome being viable, though they be only a few characters apart.

  24. Patrick: Documenting all of your definitions and your whole argument at the same time in one post or comment would certainly be the most forthright approach and would allow other people to evaluate it most easily. That’s how scientific papers are written. I can’t think of any reason why you wouldn’t publish that way.

    The reason mostly is that I’m waiting on OMagain and I have some time on my hands and would like to have a discussion that does not involve “Christians are poopy heads” and the bot.

    Patrick: With a sufficiently precise balance it’s possible to distinguish between two silver dollars.

    If you recall I said two unaided observers

    Patrick: Two identical strings cannot be distinguished from each other.

    Neither can two sufficiently long random strings that aren’t necessarily identical, or two sufficiently long strings with the same pattern that differ in small random ways.

    Patrick: I’m still not sure what you’re trying to say here. Having the full argument articulated would definitely help.

    As would the tool. The argument itself won’t make a lot of sense without the tool. As soon as it’s completed you will be able to see this for yourself

    peace

  25. petrushka: The knowledge of x does not help to describe y when x and y are close is exactly the situation in genomes. If x genome is viable, this says nothing about y genome being viable, though they be only a few characters apart.

    In the case of the game we are not talking about viability but indistinguishability.

    The genome itself is not an observer as far as I can tell. If it were, virus infection would be unlikely.

    peace

  26. faded_Glory: What you are talking about is more like a device that can select between alternatives based on some set of criteria. A ‘Filter’, perhaps?

    Sort of, but until Patrick has finished his little hack I will continue to assume that the filter is not mechanical or computable.

    peace

  27. fmm,

    I think the OP is following in the steps of some traditional ID. I’ve suggested that it is time to dump some of the traditional ID approaches. The TSZ crowd has some good criticisms the ID community doesn’t appreciate.

    Things like “random”, or at least “random process”, are already reasonably described in the existing literature. No need to re-invent the wheel as far as random goes.

    You didn’t mention at all expected results or expected value. These are the things that we would or would not expect in the presence of random processes.

    You could go the easy route and just concoct your own definitions and ideas, but it might be more fruitful to work in the framework of what is already accepted. The easy route is to use your own definition of random; I’m suggesting instead that you appeal to the slightly broader notion of a “random process”. We can at least make some statements about what can or cannot be expected of a random process. This is a harder route, as it entails actual study of the math, engineering, and science literature. But if you’re dealing with an audience conversant in these topics, it helps to try to at least meet the audience half-way.

    Stating a string is random is going down the wrong path.

    Instead, I recommend focusing on configurations of PHYSICAL (like dominos and coins and chiral amino acids) rather than conceptual objects (strings like 123…).

    You might be able to say a PHYSICAL configuration:

    1. is
    2. is not
    3. cannot be determined

    as the result of a random process. Going into string analysis (conceptual objects) is the wrong way to go; physical objects (molecules, atoms, coins, dominos) are a superior approach.

    Finally, you don’t need to go into “computable function”. That just confuses the issue. I recommend deleting it from the list. Avoid “Designer”; you don’t need it as a hypothesis or definition to make scientifically valid, or at least defensible, claims about whether something is the result of a random process or not. Drop it from the list.

    You can use the word artifact, I suppose. One can use the word “system”, whereby a system is a collection of PHYSICAL objects that can be found in various possible physical configurations (i.e. atoms, molecules, coins, dominos, etc.). With the notion of a system with various possible physical configurations, we can then say whether a particular system configuration is or is not consistent with a random process (a process that maximizes the uncertainty of finding a particular configuration over time).

    What I have stated are notions that one can glean from the language of probability, statistics, engineering, statistical mechanics and (gasp) thermodynamics.

    If one goes this route, one might have a better chance of creating a game that has some credibility.

  28. faded_Glory: If you replace the loaded word ‘being’ with the more neutral ‘entity’, most manufacturing machines would fall under this definition, which I suspect is not what you want?

    No, I am happy with “entity”, as long as it can produce an artifact as I’ve defined it.

    I would look at manufacturing machines as tools that are used by designers so perhaps that part of the definition needs some firming up.

    Do you have any suggestions to help me convey what I would like?

    peace

  29. fifthmonarchyman,

    Neither can two sufficiently long random strings that aren’t necessarily identical, or two sufficiently long strings with the same pattern that differ in small random ways.

    Of course they can. If they’re not identical, they can be distinguished from each other.

    I get the feeling you mean something other than the dictionary definition when you say “distinguish”.

  30. stcordova: Things like “random” or at least “random process” is already reasonably described in existing literature. No need to re-invent the wheel as far as random.

    How about you provide a definition you like and we can discuss it?

    stcordova: Stating a string is random is going down the wrong path.

    Instead, I recommend focusing on configurations of PHYSICAL (like dominos and coins and chiral amino acids) rather than conceptual objects (strings like 123…).

    A string can represent any physical configuration, and has the advantage of removing context to minimize bias and cheating.

    I think you will understand this when you see the game

    stcordova: Finally, you don’t need to go into “computable function”. That just confuses the issue.

    Actually, it is the heart of the argument. Did you read the paper?

    stcordova: Avoid “Designer” you don’t need it as a hypothesis or definition to make scientifically valid or at least defensible claims about whether something is the result of a random process or not.

    Random processes are only one part of the equation. We are also talking about computable functions. Again, if you read the paper we can discuss this more fruitfully, I think.

    peace

  31. Patrick: If they’re not identical, they can be distinguished from each other.

    Not in the context of the game.

    As you know in the game all the observer has to go by is moving lines. I can tell you from experience that strings can be quite different and still be indistinguishable when seen in the context of the game

    peace

    A string can represent any physical configuration and has the advantage of removing context to minimize bias and cheating.

    Disagree. The string

    H H H H H H H H H. …

    may or may not suggest design if H represents some physical outcome that is observed over time. It is pointless to analyze strings without physical context.

  33. stcordova: may or may not suggest design if H represents some physical outcome that is observed over time. It is pointless to analyze strings without physical context.

    Perhaps it might help if you looked at this paper and the game associated with it.

    http://arxiv.org/pdf/1002.4592.pdf

    My game is simply a repurposing of it.

    peace

  34. fifthmonarchyman: Because, as has been pointed out, the string 123456789 has the same probability as the string 294755890, but the second string is “random” while the first one is not.

    Actually, according to your own example:

    fifthmonarchyman: For example the string 15926535 might appear to be random unless it is seen in the following context 3.14159265359. Which is of course Pi

    …none of those sequences, and no other finite sequence for that matter would be random, because any sequence has a probability = 1 of being contained in a pattern like the infinite digits of PI

  35. fifthmonarchyman: no I am happy with entity. As long as it can produce an artifact as I’ve defined it.

    I would look at manufacturing machines as tools that are used by designers so perhaps that part of the definition needs some firming up.

    Do you have any suggestions to help me convey what I would like?

    peace

    My problem is that I simply don’t understand your definition of artifact.

    I gather that you are trying to avoid the circularity that a designer is what produces artifacts, and artifacts are what is produced by a designer. That is great, but can you give some concrete examples of artifacts according to your definition?

    fG

  36. fifthmonarchyman: If two people disagree the string is random to one and not to the other, If I was to see my social security number presented here is would have meaning to me and I hope would appear to be random digits to everyone else.

    Under this understanding of random, randomness is not a property of an object and you should not be trying to define it as such.

    Because as has been pointed out the string 123456789 has the same probability as the string 294755890 but the second string is “random” while the first one is not.

    Do you think you can modify your definition to take this into account?

    Since you have just said that randomness (sensu fmm) is not a property of an object (or string), randomness would be in the eye of the observer, and your observer is some kind of filter, I think. Are we getting into the domain of signal and noise here? In that case you could define randomness as one type of output from your filter in contrast to other outputs. To go anywhere with this I would need to understand your filter (your ‘Observer’) much better though.

    fG
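One computable stand-in for “exhibiting no discernible pattern” that a filter/Observer could use is incompressibility, a rough practical proxy for Kolmogorov complexity (which is itself uncomputable). A minimal sketch using Python’s zlib; this is my illustration of fG’s signal-vs-noise suggestion, not part of fmm’s proposed game:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a rough, computable proxy
    for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

patterned = b"123456789" * 10   # 90 bytes with an obvious repeating pattern
noisy = os.urandom(90)          # 90 bytes from the OS entropy pool

print(compressed_size(patterned))   # small: the repetition compresses well
print(compressed_size(noisy))       # around 90 or more: no pattern found
```

The caveat fits the thread: the digits of pi compress poorly under zlib yet are fully determined, so incompressibility only captures apparent randomness relative to the patterns the compressor knows about, which is exactly the observer-relativity fmm describes.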

  37. fmm,

    The question of the financial “game” is a little beside the point. There are professionals who play this game and make lots of money because they play in the real world with their own or Other People’s Money (OPM).

    The professionals who do well actually have a high failure rate of wrong guesses, but when they are right, they hit it big. It is rooted in Pascal’s notion of expected value (payoff) and Pascal’s wagering strategies.

    The most notable pure quant using a simple pattern-recognition system is Richard Dennis:

    https://en.wikipedia.org/wiki/Richard_Dennis

    Richard J. Dennis, a commodities speculator once known as the “Prince of the Pit,”[1] was born in Chicago, in January, 1949. In the early 1970s, he borrowed $1,600 and reportedly made $200 million in about ten years. When a futures trading fund under his management incurred significant losses in the stock market crash of 1987 he retired from trading for several years.[2] He has been active in Democratic and Libertarian political causes, most notably in campaigns against drug prohibition.[3]

    I know one of Dennis’s associates through my blackjack circles, namely, Russell Sands. The general view is that what worked in the Richard Dennis golden age is no longer as viable, the markets adjusted to such simple prediction methods.

    In a few hours, the after-church crowd will start pouring into this and the nine other Tunica casinos, which are situated on the banks of the Mississippi River, smack-dab in the heart of the Bible Belt some 20 miles south of Memphis. “Sin City South,” Tunica is sometimes called.

    Some even incorporate gambling techniques into their trading. Edward O. Thorp, who is an infamous blackjack card counter and author of the best-selling gambling book Beat the Dealer, now manages about $300 million for wealthy investors. And Russell Sands, now an investment newsletter editor and money manager, is a champion blackjack player. Sands was one of the “turtle traders,” a group of 20 commodities traders who posted stellar performances in the 1980s by undertaking risky trading strategies.

    http://www.bloomberg.com/bw/stories/1999-11-07/a-school-for-day-traders

    The guy who trained me in derivatives trading was a Morgan Stanley executive who knows Thorp and Sands personally. I know Sands personally.

    There is no need to build a computer game for the financial markets except maybe to train someone. But if one is that good, why would you build a game when you can play it and profit from it?

    Thorp played the game (blackjack and Wall Street) as well as anyone except maybe blackjack player Bill Gross. But the king would have to be James Simons at Renaissance Technologies:

    https://en.wikipedia.org/wiki/James_Harris_Simons

    James Harris “Jim” Simons (born 1938) is an American mathematician, hedge fund manager, and philanthropist. He is a code breaker and studies pattern recognition.[4] Simons is the co-inventor – with Shiing-Shen Chern – of the Chern–Simons 3-form (Chern and Simons 1974), one of the most important parts of string theory.[5]

    Simons was a professor of mathematics at Stony Brook University and was also the former chair of the Mathematics Department at Stony Brook.

    In 1982, Simons founded Renaissance Technologies, a private hedge fund investment company based in New York with over $25 billion under management. Simons retired at the end of 2009 as CEO of one of the world’s most successful hedge fund companies.[6] Simons’ net worth is estimated to be $14 billion.[1]

    Unfortunately, their best math is not anywhere in view of the public. They are closely guarded trade secrets.

    PS
    Sniff, I miss going to Tunica after the Sunday morning church services. I was forcibly escorted by security and threatened with jail for trespassing if I ever returned to Hollywood Casino. Gee, all I did was use my brain to win the game. That blasted casino said they love winners — LIARS!

  38. stcordova: Disagree. The string

    H H H H H H H H H …

    may or may not suggest design if H represents some physical outcome that is observed over time. It is pointless to analyze strings without physical context.

    To be honest, Sal, it is also pointless to analyse biology in terms of coins and dominoes.

    fG

  39. dazz: …none of those sequences, and no other finite sequence for that matter would be random, because any sequence has a probability = 1 of being contained in a pattern like the infinite digits of PI

    Exactly,

    I don’t believe true randomness exists, only apparent randomness. It’s only random from the perspective of the observer.

    stcordova: Disagree. The string.

    H H H H H H H H H. …

    may or may not suggest design if H represents some physical outcome that is observed over time.

    I agree, but in the context of my game we could not infer design from that string because it is not sufficiently complex.
    More about that later 😉

    peace

  40. stcordova: The question of the financial “game” is a little beside the point. There are professionals who play this game and make lots of money because they play in the real world with their own or Other People’s Money (OPM).

    The point is that these folks are “observers” according to my definition. If Patrick’s little two-week hack is successful these folks will be out of a job. 😉

    peace

  41. faded_Glory: Are we getting into the domain of signal and noise here? In that case you could define randomness as one type of output from your filter in contrast to other outputs. To go anywhere with this I would need to understand your filter (your ‘Observer’) much better though.

    Perhaps you are right. I need to think about this for a little before I go about putting my argument together.

    For now I expect we can just specify that “random” means the product of a standard random number generator.

    peace
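Specifying “random” as the output of a standard random number generator also illustrates the observer-relative point made earlier in the thread: a pseudo-random stream is fully predictable to anyone who knows the seed and patternless to everyone else. A minimal sketch with Python’s random module (a Mersenne Twister, not a true entropy source); the seed values here are arbitrary choices of mine:

```python
import random

def digit_string(seed: int, length: int = 9) -> str:
    """Pseudo-random digit string; fully determined by the seed."""
    rng = random.Random(seed)
    return "".join(str(rng.randrange(10)) for _ in range(length))

s1 = digit_string(42)
s2 = digit_string(42)   # same seed, same "random" string
s3 = digit_string(43)   # different seed, almost certainly a different string

print(s1 == s2)   # True: an observer who knows the seed predicts it exactly
```

So under this working specification, randomness is relative to what the observer knows (the seed), which matches fmm’s earlier social-security-number example.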

  42. fifthmonarchyman: Exactly,

    I don’t believe true randomness exists, only apparent randomness. It’s only random from the perspective of the observer.

    I agree, but in the context of my game we could not infer design from that string because it is not sufficiently complex.
    More about that later

    peace

    The problem here is that you have a definition of randomness that negates the possibility of any randomness. Don’t get me wrong, but it seems to me that it clearly begs the question in favor of teleology.

    The thing is that there’s no way, AFAIK, to ultimately detect true randomness, but that doesn’t mean that one cannot find a way to generate random outputs. Inability to detect randomness doesn’t imply there’s no such thing.

    Patrick:
    Welcome, dazz!

    Thank you Patrick 🙂

  43. dazz:

    The thing is that there’s no way, AFAIK, to ultimately detect true randomness, but that doesn’t mean that one cannot find a way to generate random outputs. Inability to detect randomness doesn’t imply there’s no such thing.

    I agree that from our perspective this question is ultimately unknowable.

    What I’m looking for is a definition that works whether or not randomness actually exists, sort of like the “random with respect to fitness” that we have in Darwinism.

    peace

  44. To be honest, Sal, it is also pointless to analyse biology in terms of coins and dominoes.

    fG

    Coins and dominoes are an illustration of physical principles. We could instead consider hypothetical physical molecular systems like 100% R-amino-acid polymers (which don’t exist as far as I know, but which we could in principle build).

    We would likely infer intelligent design for 100% R-amino-acid polymers if we found them today, but we don’t for 100% L-amino-acid polymers, even though the same sort of statistical argument can be made (just the flip side of the coin, so to speak).

    FIFTHMONARCHYMAN,

    Regarding the paper involving financial markets: there is a high failure rate in some realms of detecting non-random patterns.

    The traditional ID argument is framed in absolutist terms, which I think makes little sense since we are dealing with practically undecidable propositions (unless God gives us a fireworks show and speaks from the heavens).

    I’m OK with an inference having some chance of being wrong; in the game of finance what matters are the weighted costs of being wrong. The game would be more compelling if there were associated payoffs and costs for being right or wrong, respectively. Obviously Ed Thorp played the game well, and James Simons implemented Pascal’s wagering strategies especially well. Heavens!

    FYI:
    Thorp published a now-obsolete strategy that endeared him to Wall Street and got his hedge fund started. Even though it is obsolete, it was pattern recognition that put money where Thorp’s mouth was. Thorp was a professor of math at MIT and a friend and gambling buddy of Claude Shannon’s there. Here is Thorp’s “detection” book, available for free:

    http://edwardothorp.com/sitebuildercontent/sitebuilderfiles/beatthemarket.pdf

    How he eventually made the rest of his $300 million is a closely guarded trade secret. But his book shows Thorp’s thought process. A beautiful mind indeed!

  45. fifthmonarchyman,

    I remember that paper from when you first mentioned it. There are some issues with it.

    First, their discussion of “integrated information” includes this:

    Imagine that the scented candle factory enhances the artificial smell detector so that now it can distinguish between 1 million different smells, even more than the human nose. Can we now say that the detector is truly smelling chocolate when it outputs chocolate, given that it is producing more information than a human? What is the difference between the detector’s experience and the human experience?

    Like the human nose, the artificial smell detector uses specialized olfactory receptors to diagnose the signature of the scent and then looks it up in a database to identify the appropriate response. However, each smell is responded to in isolation of every other. The exact same response to a chocolate scent occurs even if the other 999,999 entries in the database are deleted. The factory might as well have purchased a million independent smell detectors and placed them together in the same room, each unit independently recording and responding to its own data.

    What if instead of a database lookup the artificial smell detector trained a neural network to distinguish between the smells? The weights of the connections between each node would seem to meet the criteria for integrated information, by the definition used in the paper. Do you agree?

    Shortly after that section is this discussion of the XOR function:

    Griffith (2014) rebrands the informational difference between a whole and the union of its parts as ‘synergy’. He presents the XOR gate as the canonical example of synergistic (i.e. integrated) information. Consider, for example, a XOR gate with two inputs, X1 and X2, which can be interpreted as representing a stimulus and an original brain state. They combine integratively to yield Y , the resultant brain state which encodes the stimulus. Given X1 and X2 in isolation we have no information about Y . The resultant brain state Y can only be predicted when both components are taken into account at the same time. Given that the components X1 and X2 do not have any independent causal influence on Y, all of the information about Y here is integrated.

    Interestingly, training a neural network to calculate XOR is one of the first tasks assigned to students of machine learning. This makes it even more clear that the state of a trained neural network meets the definition of integrated information.
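To make the XOR synergy point concrete, here is a minimal sketch of a 2-2-1 threshold network computing XOR. The weights are set by hand rather than trained, so the example stays deterministic, but the point survives: neither hidden unit alone determines the output, which emerges only from the weights taken together.

```python
def step(x: float) -> int:
    """Heaviside threshold activation."""
    return 1 if x > 0 else 0

def xor_net(x1: int, x2: int) -> int:
    """2-2-1 threshold network computing XOR.

    h1 fires on (x1 OR x2), h2 fires on (x1 AND x2);
    the output fires on (OR and not AND), i.e. XOR.
    """
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: logical OR
    h2 = step(x1 + x2 - 1.5)    # hidden unit 2: logical AND
    return step(h1 - h2 - 0.5)  # output: OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```

A trained sigmoid network distributes the same function over learned real-valued weights; whether that network state counts as “integrated information” in the paper’s sense is precisely the question at issue.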

    Second, and a more serious problem, is the discussion of human memory:

    While it seems intuitive for the brain to discard irrelevant details from sensory input, it seems undesirable for it to also hemorrhage meaningful content. In particular, memory functions must be vastly non-lossy, otherwise retrieving them repeatedly would cause them to gradually decay.

    We propose that the information integration evident in cognition is not lossy.

    As noted the last time you discussed this paper, this is not how human brains work. There are a number of papers available on the topic. This one from Psychology Today summarizes what is actually the case:

    Not only are our memories faulty (anyone who has uncovered old diaries knows that), but more importantly Schiller says our memories change each time they are recalled.

    So memory functions are definitely not “vastly non-lossy” as assumed in the paper. That has a large impact on their conclusions. After some talk about strings and Kolmogorov complexity, the authors come to one of their more important points:

    In this section we prove an interesting result using the above definition, namely that lossless information integration cannot be achieved by a computable process.

    They then draw conclusions about human brains based on that. Leaving aside whether or not their math means what they say it means, the fact that human memories are not lossless means that this result has no bearing on whether “brain processes can be modelled computationally.”

  46. fifthmonarchyman:
    For now I expect we can just specify that random is the product of a standard random number generator

    That would be very clear. Of course it means that your random strings will contain some ‘patterns’ every now and then.

    I hope that your tool does more than distinguish white noise from signal in a time series?

    fG

  47. Please take some time to review and let me know if these working definitions are acceptable and clear enough for you all.

    I don’t think they are clear enough, and I’ve suggested alternate approaches. They don’t work for me, but what matters is whether they work for the would-be participants or users of your game. I’m an ID proponent, so what I’ve said comes from someone friendly to the design-detection side of the aisle.

    FWIW, I conducted design detection games in extra curricular contexts with science and other students:

    To recognize design is to recognize products of a like-minded process, identifying the real probability in question, Part I

  48. As Patrick points out, one problem with The Paper is it’s simply wrong about how brains work.
