• Re: is the ct-thesis cooked?

    From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 07:06:54 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in the software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As a result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    The halting problem could surely be framed better so as not to sound
    like a dogma, at least. In practice, algorithmic inconclusiveness has
    a probability of 0.001, not the 100% guarantee that many engineers perceive.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against its own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus is not limited to Turing Machines.

    All it needs is that the decider be limited by the rules of a computation.

    All the arguments against the proof seem to begin with the error that
    the decider can be changed after the fact and such a change changes the
    input to match, but that breaks the fundamental property of
    computations, that they are fixed algorithms.

    The proof shows that the SPECIFIC decider that the input was made from
    will get the wrong answer, and we can make such an input for ANY
    specific decider, and thus no decider can get all answers correct.
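
    To make that construction concrete, here is a rough illustrative sketch in
    Python-style pseudocode (a sketch only, not part of the original proof);
    H stands for some specific, fixed candidate halt decider with the assumed
    signature H(program_source, data) -> bool:

    def H(program_source: str, data: str) -> bool:
        # Placeholder for a specific, fixed candidate halt decider.
        # Any fixed choice will do; the construction below defeats it.
        return True

    def D(program_source: str) -> None:
        # Ask the fixed decider H about D run on its own source,
        # then do the opposite of whatever H reports.
        if H(program_source, program_source):   # H says "D(D) halts" ...
            while True:                          # ... so D runs forever
                pass
        # H says "D(D) does not halt" ... so D halts immediately

    # D(D) still either halts or it doesn't, so a correct answer exists;
    # it is just the opposite of whatever this specific H reports.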

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong with the question of its halting, and thus a
    non-answer like "its behavior is contrary" is not valid.

    Everyone trying to make these arguments just shows they don't
    understand the basics of what a computation is.


    i think the actual problem is that TM computing is not sufficient to
    describe all computable relationships. TM computing is considered the
    gold standard for what is computable, but we haven't actually proved
    that.

    the CT-thesis is a thesis, not a proof. we've been treating it as a
    law ... but we never actually justified that it should be law. this
    whole time we've been discarding things like a general halting
    decider because TM computing can be used to create paradoxes in
    regards to it, but maybe the problem is that TM computing is not
    sufficient to describe a general halting decider, not that a general
    halting decider is impossible.

    that's my new attack vector on the consensus understanding: the CT
    thesis. i aim to describe a general algo that *we* can obviously
    compute using deterministic steps, but such an algo cannot be funneled
    thru a general interface because TM computing will read and paradox it.


    On 12/11/2025 12:03 AM, polcott wrote:
    On 12/10/2025 4:58 PM, wij wrote:
    On Wed, 2025-12-10 at 16:43 -0600, polcott wrote:
    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine
    this is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.

    If you honestly admit you are solving POO Problem, everything is
    fine.


    *It has taken me 21 years to boil it down to this*

    When the halting problem requires a halt decider
    to report on the behavior of a Turing machine this
    is always a category error.

    The corrected halting problem requires a Turing
    machine decider to report on the behavior that
    its finite string input specifies.








    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 07:12:07 2026
    From Newsgroup: comp.ai.philosophy

    On 1/6/26 11:33 PM, olcott wrote:
    On 1/6/2026 9:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in the software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As a result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    The halting problem could surely be framed better so as not to sound
    like a dogma, at least. In practice, algorithmic inconclusiveness has
    a probability of 0.001, not the 100% guarantee that many engineers perceive.
    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against its own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    No it is not that.
    After spending 20,000 hours on this over 20 years,
    the equivalent of ten full-time years:

    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*
    *if undecidability is correct then truth itself is broken*

    Maybe what is broken is your concept of truth, because it is just wrong.




    The simplest 100% correct resolution to the
    actual definition of the Halting Problem
    (that includes the counter-example input)
    is that (in the case of the counter-example input)
    the halting problem asks a yes/no question
    that has no correct yes/no answer.

    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*
    *The HP asks an incorrect question*

    But it doesn't.

    Your "logic" is based on FALSE concepts, like subjecteive questions are
    valid.


    We can only get to your idea of a different
    interface when we change the definition of
    that Halting Problem. The original problem
    itself is simply incorrect.

    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*
    *I proved the HP input is the same as the Liar Paradox back in 2004*

    Nope.

    function LoopIfYouSayItHalts (bool YouSayItHalts):
      if YouSayItHalts () then
        while true do {}
      else
        return false;

    Does this program Halt?

    (Your (YES or NO) answer is to be considered
     translated to Boolean as the function's input
     parameter)

    Please ONLY PROVIDE CORRECT ANSWERS!

    https://groups.google.com/g/sci.logic/c/Hs78nMN6QZE/m/ID2rxwo__yQJ



    Which just shows you didn't know what the Halting Problem was then, and
    you still don't, as the halting problem isn't based on TELLING the
    machine your answer, but on predicting the behavior of the input program
    for its specific input.

    LoopIfYouSayItHalts(false) is halting.
    LoopIfYouSayItHalts(true) is non-halting.

    One answer per input is the specification of the problem, and there is a decider that can give the answer.

    Note also, your above program is just a syntax error, as YouSayItHalts
    is defined to be a bool, but then you "call" it in a malformed if
    statement whose opening parenthesis is misplaced.
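
    For what it's worth, a minimal well-formed rendering (my own sketch in
    Python, not the program as posted) treats the flag as an ordinary boolean
    parameter, and then each fixed input has exactly one halting status:

    def loop_if_you_say_it_halts(you_say_it_halts: bool) -> bool:
        # Hypothetical cleaned-up version of the posted pseudocode.
        if you_say_it_halts:
            while True:      # told "it halts" -> run forever
                pass
        return False         # told "it does not halt" -> halt immediately

    # loop_if_you_say_it_halts(False) halts (returns False);
    # loop_if_you_say_it_halts(True) never halts.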
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 14:09:07 2026
    From Newsgroup: comp.ai.philosophy

    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in the software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As a result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    The halting problem could surely be framed better so as not to sound
    like a dogma, at least. In practice, algorithmic inconclusiveness has
    a probability of 0.001, not the 100% guarantee that many engineers perceive.
    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against its own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus is not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a computation.

    All the arguments against the proof seem to begin with the error that
    the decider can be changed after the fact and such a change changes the
    input to match, but that breaks the fundamental property of
    computations, that they are fixed algorithms.

    The proof shows that the SPECIFIC decider that the input was made from
    will get the wrong answer, and we can make such an input for ANY
    specific decider, and thus no decider can get all answers correct.

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong with the question of its halting, and thus a
    non-answer like "its behavior is contrary" is not valid.

    Everyone trying to make these arguments just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they are deciding on ... ofc deciders can modify their behavior based on an input
    which they are included within, like they can modify their behavior
    based on the properties of the input.

    this is how partial deciders can intelligently block on responding to
    input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't be
    more efficient than brute force, because again, they can block when
    encountering a paradox specific to their interface.

    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS, LOOPS,
    and PARADOX. when partial deciders receive PARADOX back from their algo
    run they will then just loop forever to never respond.

    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.
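
    to make the shape of this concrete, here's a rough sketch in my
    pseudo-pyscript; general_algo, the HALTS/LOOPS/PARADOX verdicts, and the
    quine helper are all hypothetical names for the scheme described above:

    from enum import Enum

    class Verdict(Enum):
        HALTS = 1
        LOOPS = 2
        PARADOX = 3   # pathological relative to the asking interface

    def make_self_description() -> str:
        # stand-in for the quine: a real partial decider would construct
        # its own complete description here
        return "<description of this partial decider>"

    def general_algo(self_description: str, program: str, data: str) -> Verdict:
        # hypothetical common backing algorithm: given a description of the
        # asking partial decider ("self") plus the program and its input,
        # report HALTS, LOOPS, or PARADOX. how it would do that is exactly
        # what's in dispute in this thread.
        raise NotImplementedError

    def partial_decider(program: str, data: str) -> bool:
        my_quine = make_self_description()
        verdict = general_algo(my_quine, program, data)
        if verdict is Verdict.PARADOX:
            while True:      # block forever rather than give a wrong answer
                pass
        return verdict is Verdict.HALTS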

    "but i can loop over all partial deciders to produce a paradox" ... uhh
    no you can't? traditional computing cannot iterate over all functionally equivalent machines, so it certainly can't iterate over all almost functionally equivalent machines, so you cannot claim to produce a
    general paradox for the general algo as such a computation is outside
    the scope of classical computing limits.

    i await to see how you purposefully misunderstand this
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 22:16:48 2026
    From Newsgroup: comp.ai.philosophy

    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in the software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As a result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    The halting problem could surely be framed better so as not to sound
    like a dogma, at least. In practice, algorithmic inconclusiveness has
    a probability of 0.001, not the 100% guarantee that many engineers perceive.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against its own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit


    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguments against the proof seem to begin with the error that
    the decider can be changed after the fact and such a change changes the
    input to match, but that breaks the fundamental property of
    computations, that they are fixed algorithms.

    The proof shows that the SPECIFIC decider that the input was made from
    will get the wrong answer, and we can make such an input for ANY
    specific decider, and thus no decider can get all answers correct.

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong with the question of its halting, and thus a
    non-answer like "its behavior is contrary" is not valid.

    Everyone trying to make these arguments just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they are deciding on ... ofc deciders can modify their behavior based on an input which they are included within, like they can modify their behavior
    based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by the "pathological" input.


    this is how partial deciders can intelligently block on responding to
    input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't be
    more efficient than brute force, because again, they can block when
    encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    And nothing in the theory disagrees with partial halt deciders existing,
    they just can NEVER give an answer to the pathological program based on
    them, as if they give an answer, it will be wrong.


    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS, LOOPS,
    and PARADOX. when partial deciders receive PARADOX back from their algo
    run they will then just loop forever to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and, by its nature, it
    can't determine if the input is just using a computational equivalent
    of itself that doesn't match its idea of what it looks like.

    This FACT just breaks your concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give the
    right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine, so
    you are just showing you are fundamentally incorrect in your basis.

    You can't "run" and "interface", only an actual program that implements it.


    "but i can loop over all partial deciders to produce a paradox" ... uhh
    no you can't? traditional computing cannot iterate over all functionally equivalent machines, so it certainly can't iterate over all almost functionally equivalent machines, so you cannot claim to produce a
    general paradox for the general algo as such a computation is outside
    the scope of classical computing limits.

    So, you are just admitting you can't use an enumerated list of partial
    deciders to get the answer.

    The pathological program doesn't need to enumerate the deciders, it just
    needs to use whatever you make your final decider, which can only
    partially enumerate the partial deciders.


    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE their behavior, they HAVE specific behavior that
    depends on the input, and ALWAYS have that behavior for that input.

    The definition of a computation means it can't squirrel away the fact
    that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Mon Jan 12 20:21:27 2026
    From Newsgroup: comp.ai.philosophy

    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in the software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As a result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    The halting problem could surely be framed better so as not to sound
    like a dogma, at least. In practice, algorithmic inconclusiveness has
    a probability of 0.001, not the 100% guarantee that many engineers perceive.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against its own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit

    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus is not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguments against the proof seem to begin with the error that
    the decider can be changed after the fact and such a change changes the
    input to match, but that breaks the fundamental property of
    computations, that they are fixed algorithms.

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for ANY
    specific decider, and thus no decider can get all answers correct.

    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong with the question of its halting, and thus a
    non-answer like "its behavior is contrary" is not valid.

    Everyone trying to make these arguments just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they are
    deciding on ... ofc deciders can modify their behavior based on an
    input which they are included within, like they can modify their
    behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by the "pathological" input.


    this is how partial deciders can intelligently block on responding to
    input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't be
    more efficient than brute force, because again, they can block when
    encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded space),

    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)
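
    roughly, in pseudo-pyscript, the brute-force idea for the bounded-space
    cases looks like this (the machine/step/configuration interface here is
    hypothetical):

    def brute_force_decide(machine) -> bool:
        # simulate step by step, remembering every configuration seen.
        # halting -> True. a repeated configuration of a deterministic,
        # bounded-space run -> it is stuck in a loop, so False.
        # runaway computations that keep growing their space are exactly
        # the case this can never settle in finite time.
        seen = set()
        config = machine.initial_configuration()   # assumed interface
        while True:
            if machine.is_halted(config):
                return True
            if config in seen:
                return False
            seen.add(config)
            config = machine.step(config)           # assumed deterministic step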


    And nothing in the theory disagrees with partial halt deciders existing, they just can NEVER give an answer to the pathological program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there that
    must decide correctly on it. so, for any given machine there is a method
    that does correctly decide on it without ever giving any wrong answers to
    any other machine (tho it may not give answers at times).

    as a man, i'm not subject to having my output read and contradicted
    like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because some
    method must exist that can correctly decide on any given input... i can
    know that method for any given input, and therefore i can decide on any
    given input.

    this is why i'm really starting to think the ct-thesis is cooked. you
    say i can't do that because turing machines can't do that ... but
    where's the proof that turing machines encompass all of computing? why am
    i limited by the absolute nonsense that is turing machines producing
    pathological input to themselves?

    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you are treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century. i
    don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation cannot
    be generally expressed thru a turing machine computation input/output
    specification.



    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS,
    LOOPS, and PARADOX. when partial deciders receive PARADOX back from
    their algo run they will then just loop forever to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    can't determine if the input is just using a computational equivalent of itself that doesn't match its idea of what it looks like.

    This FACT just breaks your concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is going
    on in the fundamental theory of computing


    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the

    it's the general algo backing the partial deciders - all it needs to do
    is return PARADOX, in which case the partial decider decides to loop(),
    or maybe we can just extract that functionality into the general partial
    algo itself...

    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine, so
    you are just showing you are fundamentally incorrect in your basis.

    You can't "run" and "interface", only an actual program that implements it.

    sure, but all the partial deciders do is construct a self-reference using
    a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in order
    to ascertain where their output feeds back into and affects the prediction
    they are making.

    and yes the partial deciders do contain full descriptions of the common backing algo, but they still really do just act as an interface to that
    common algo

    they act like an exposed API/interface into the common algo



    "but i can loop over all partial deciders to produce a paradox" ...
    uhh no you can't? traditional computing cannot iterate over all
    functionally equivalent machines, so it certainly can't iterate over
    all almost functionally equivalent machines, so you cannot claim to
    produce a general paradox for the general algo as such a computation
    is outside the scope of classical computing limits.

    So, you are just admitting you can't use an enumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary


    The pathological program doesn't need to enumerate the deciders, it just
    needs to use whatever you make your final decider, which can only
    partially enumerate the partial deciders.

    it would in order to break the general algo across all selves. the self
    acts as the fixed point of reference from which the decision is made ...
    and while no fixed point can decide on all input, for any given input
    there is a fixed point of self that can decide on that input. and this
    can be encapsulated into a general algorithm that encapsulates a general
    procedure even if any given fixed point of self is not general.

    therefore a general algo exists, even if any particular fixed point of decision making is contradicted.

    dear god, why is computing theory such a shit show? cause we've been
    almost blindly following what the forefathers of computing said?



    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE their behavior, they HAVE specific behavior that depends on the input, and ALWAYS have that behavior for that input.

    The definition of a computation means it can't squirrel away the fact
    that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.

    i'm aware, i'm not really sure why ur repeating that

    and i await ur further purposeful misunderstanding
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Tristan Wibberley@tristan.wibberley+netnews2@alumni.manchester.ac.uk to comp.ai.philosophy,sci.lang on Tue Jan 13 07:02:46 2026
    From Newsgroup: comp.ai.philosophy

    On 13/01/2026 04:21, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    This is the god complex. While it's true that definition is a volition,
    like transforming into a jupiter-sized victoria sponge it is not always
    a free choice.

    Also, people often use "define" to mean "define a constraint for". You
    can define a constraint as freely as you can describe it, without needing
    the god complex, but perhaps there is no definition of a solution for a
    system of constraints that includes it.

    I want to know the right terminology for what you did in your statement
    "i'm defining the algo, so i say it does": you have one meaning for
    "defining" in "i'm defining the algo" which is "defining a constraint
    for", but a different meaning in a re-interpretation of the supposed
    world that's referenced by "so" in "so i say it does" which is actual
    defining.
    --
    Tristan Wibberley

    The message body is Copyright (C) 2026 Tristan Wibberley except
    citations and quotations noted. All Rights Reserved except that you may,
    of course, cite it academically giving credit to me, distribute it
    verbatim as part of a usenet system or its archives, and use it to
    promote my greatness and general superiority without misrepresentation
    of my opinions other than my opinion of my greatness and general
    superiority which you _may_ misrepresent. You definitely MAY NOT train
    any production AI system with it but you may train experimental AI that
    will only be used for evaluation of the AI methods it implements.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 13 07:09:44 2026
    From Newsgroup: comp.ai.philosophy

    On 1/12/26 11:21 PM, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in the software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As a result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    The halting problem could surely be framed better so as not to sound
    like a dogma, at least. In practice, algorithmic inconclusiveness has
    a probability of 0.001, not the 100% guarantee that many engineers perceive.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against its own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic limit

    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus is not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguments against the proof seem to begin with the error that
    the decider can be changed after the fact and such a change changes the
    input to match, but that breaks the fundamental property of
    computations, that they are fixed algorithms.

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for
    ANY specific decider, and thus no decider can get all answers correct.
    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong with the question of its halting, and thus a
    non-answer like "its behavior is contrary" is not valid.

    Everyone trying to make these arguments just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they
    are deciding on ... ofc deciders can modify their behavior based on
    an input which they are included within, like they can modify their
    behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by
    the "pathological" input.


    this is how partial deciders can intelligently block on responding to
    input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't be
    more efficient than brute force, because again, they can block when
    encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded space),

    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)

    Which exists and is a form of non-halting.

    Yes, Non-Turing Complete systems, with bounded space, are Halt Decidable
    with simple deciders as long as the decider is allowed to be
    sufficiently bigger than the program it is deciding on. That has been
    known for a long time, but isn't an exception to the Halting Problem, as
    it doesn't meet the basic requirements.



    And nothing in the theory disagrees with partial halt deciders
    existing, they just can NEVER give an answer to the pathological
    program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there that
    must decide correctly on it. so, for any given machine there is a method
    that does correctly decide on it without ever giving any wrong answers to
    any other machine (tho it may not give answers at times).

    So? Of course there is a decider that gets it right, as one of the
    deciders AlwaysSaysHalts or AlwaysSaysNonhalting will be right, we just
    don't know which.

    And, even if there is an always correct partial decider that gets it
    right, it can't be part of that enumerable set, so we don't know to use it.


    as a man, i'm not subject to having my output read and contradicted
    like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because some
    method must exist that can correctly decide on any given input... i can
    know that method for any given input, and therefore i can decide on any
    given input.

    As a man, you are not bound by the rules of computations, so you aren't
    eligible to be entered as a solution for a computation problem.

    So, all you are stating is you are too stupid to understand the nature
    of the problem.

    And, you CAN'T claim to know the method for any given input, as there
    are an infinite number of inputs, but only finite knowledge.

    I guess you have fallen for the Olcott trap of convincing yourself that
    you are God.


    this is why i'm really starting to think the ct-thesis is cooked. you
    say i can't do that because turing machines can't do that ... but
    where's the proof that turing machines encompass all of computing? why am
    i limited by the absolute nonsense that is turing machines producing
    pathological input to themselves?

    But the problem goes back to the fact that you are just showing you
    don't understand what a computation is.

    Yes, there is no "Proof" that Turing Machines encompass all of
    computing, that is why CT is just a thesis and not a theorem. It has
    shown itself to be correct for everything we have tried so far.


    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you are treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century. i
    don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation cannot
    be generally expressed thru a turing machine computation input/output
    specification.

    No, the problem is you don't understand what computation is, and thus,
    just like Zeno, think you have come up with a paradox.

    It SEEMS (to you) that this should be computable, but it turns out it
    isn't.

    The halting problem proof can be generalized to ANY computation
    platform, and shown to be true.




    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS,
    LOOPS, and PARADOX. when partial deciders receive PARADOX back from
    their algo run they will then just loop forever to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    Sorry, but you don't get to do that.

    And the problem is that even being given a sample "self" input, doesn't
    mean it can detect the other "self" that exists in the paradox input.

    Or that doing so helps it get the right answer.

    Whatever answer your algorithm gives will be wrong.

    There is a right answer, just the opposite of the one the algorithm gives.

    That doesn't make the name for that behavior "Paradox", it will still be
    one of "Halting" or "Non-Halting", so the third answer can't be correct.


    can't determine if the input is just using a computational equivalent
    of itself that doesn't match its idea of what it looks like.

    This FACT just breaks your concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is going
    on in the fundamental theory of computing

    No. Your algorithm begs the question by assuming you can compute
    something uncomputable.

    I guess your problem is you don't understand what an actual ALGORITHM is.



    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the

    it's the general algo backing the partial deciders - all it needs to do
    is return PARADOX, in which case the partial decider decides to loop(),
    or maybe we can just extract that functionality into the general partial
    algo itself...

    But the "Halting Behavior" of the input isn't "PARADOX", so that can't
    be a correct answer.

    You don't seem to understand that LYING is just LYING and incorrect.

    It seems that the result of your description is that ALL your partial
    deciders are going to just loop forever, and thus your claim that one of
    them will answer is just a lie.


    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give
    the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine, so
    you are just showing you are fundamentally incorrect in your basis.

    You can't "run" and "interface", only an actual program that
    implements it.

    sure, but all the partial deciders do is construct a self-reference using
    a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in order
    to ascertain where their output feeds back into and affects the prediction
    they are making.

    But that is the problem, since the input uses the machine that is
    enumerating all those deciders, every one (if they can detect themselves)
    will detect themselves and fail to answer.



    and yes the partial deciders do contain full descriptions of the common backing algo, but they still really do just act as an interface to that common algo

    they act like an exposed API/interface into the common algo

    and thus the paradox input has the code to act counter to that common algorithm, and it can NEVER give the right answer.




    "but i can loop over all partial deciders to produce a paradox" ...
    uhh no you can't? traditional computing cannot iterate over all
    functionally equivalent machines, so it certainly can't iterate over
    all almost functionally equivalent machines, so you cannot claim to
    produce a general paradox for the general algo as such a computation
    is outside the scope of classical computing limits.

    So, you are just admitting you can't use an enumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary

    So, you are just admitting that your claim is based on needing to
    compute the uncomputable; in other words, it is just a lie.

    Your enumerable set of partial deciders will just never give an answer,
    and thus you can't say that some partial decider can answer for every
    possible input.



    The pathological program doesn't need to enumerate the deciders, it
    just needs to use whatever you make your final decider, which can only
    partially enumerate the partial deciders.

    it would in order to break the general algo across all selves. the self
    acts as the fixed point of reference from which the decision is made ...
    and while no fixed point can decide on all input, for any given input
    there is a fixed point of self that can decide on that input. and this
    can be encapsulated into a general algorithm that encapsulates a general
    procedure even if any given fixed point of self is not general.

    No it doesn't, it uses the fact that your outer enumerator does it.

    Your logic is based on LIES that you assume you can do it. But,


    therefore a general algo exists, even if any particular fixed point of decision making is contradicted.

    Nope. Again, your logic is based on the fallacy of assuming the conclusion.


    dear god, why is computing theory such a shit show? cause we've been almost blindly following what the forefathers of computing said?

    No, YOU are the shit show because you don't understand that truth
    exists, but isn't always knowable. There ARE limits to what is
    computable, as problem space grows faster than solution space.




    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE their behavior, they HAVE specific behavior that
    depends on the input, and ALWAYS have that behavior for that input.

    The definition of a computation means it can't squirrel away the fact
    that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.

    i'm aware, i'm not really sure why ur repeating that

    and i await ur further purposeful misunderstanding


    Because you keep on trying to think of ways to get around that limitation.

    An algorithm does what the algorithm does, and can't change itself.

    IF what the algorithm does is just give the wrong answer to a problem,
    we can't say that the right answer doesn't exist just because a
    DIFFERENT problem based on a DIFFERENT algorithm gives a different answer.

    No machine has "Paradoxical" halting behavior as its specific behavior.
    It might be described as contrary to a given instance of a general
    algorithm, but all that shows was that algorithm was WRONG, as it still
    had specific behavior as to its actual behavior.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Tue Jan 13 12:33:18 2026
    From Newsgroup: comp.ai.philosophy

    On 1/13/26 4:09 AM, Richard Damon wrote:
    On 1/12/26 11:21 PM, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in the software optimization area get
    discarded from the very beginning because people who work on them
    perceive the halting problem as a dogma. As a result, certain
    practical things (in code analysis) are not even tried because it's
    assumed that they are bound by the halting problem.
    In practice, however, the halting problem is rarely a limitation.
    And even when one hits it, they can safely discard a particular
    analysis branch by marking it as inconclusive.

    The halting problem could surely be framed better so as not to sound
    like a dogma, at least. In practice, algorithmic inconclusiveness has
    a probability of 0.001, not the 100% guarantee that many engineers perceive.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists for
    any given machine, just that a "general" decider does not exist for
    all machines ...

    heck it must be certain that for any given machine there must exist
    a partial decider that can decide on it ... because otherwise a
    paradox would have to address all possible partial deciders in a
    computable fashion and that runs up against its own limit to
    classical computing. therefore some true decider must exist for any
    given machine that exists ... we just can't funnel the knowledge
    thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic
    limit


    Nope, because the proof doesn't actually need to talk about HOW the
    decider actually made its decision, and thus is not limited to Turing
    Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguments against the proof seem to begin with the error that
    the decider can be changed after the fact and such a change changes the
    input to match, but that breaks the fundamental property of
    computations, that they are fixed algorithms.

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for
    ANY specific decider, and thus no decider can get all answers correct.
    That the input HAS a correct answer (just the opposite of what that
    specific decider gives) shows that there IS a correct answer, so there
    is nothing wrong with the question of its halting, and thus a
    non-answer like "its behavior is contrary" is not valid.

    Everyone trying to make these arguments just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they
    are deciding on ... ofc deciders can modify their behavior based on
    an input which they are included within, like they can modify their
    behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by
    the "pathological" input.


    this is how partial deciders can intelligently block on responding
    to input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't
    be more efficient than brute force, because again, they can block
    when encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded space),

    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)

    Which exists and is a form of non-halting.

    Yes, Non-Turing Complete systems, with bounded space, are Halt Decidable

    infinite loops in a turing complete system are also fully enumerable just
    like halting machines are. they will always result in repeated
    configurations, and this is decidable within an unbounded amount of time
    using brute force.

    with simple deciders as long as the decider is allowed to be
    sufficiently bigger than the program it is deciding on. That has been
    known for a long time, but isn't an exception to the Halting Problem, as
    it doesn't meet the basic requirements.



    And nothing in the theory disagrees with partial halt deciders
    existing, they just can NEVER give an answer to the pathological
    program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there that
    must decide correctly on it. so, for any given machine there is a method
    that does correctly decide on it without ever giving any wrong answers to
    any other machine (tho it may not give answers at times).

    So? Of course there is a decider that gets it right, as one of the
    deciders AlwaysSaysHalts or AlwaysSaysNonhalting will be right, we just
    don't know which.

    those are not valid partial deciders, why are you still bringing up red herrings?


    And, even if there is an always correct partial decider that gets it
    right, it can't be part of that enumerable set, so we don't know to use it.

    and where's the proof i must enumerate all partial deciders in order to
    know it for any given machine???



    as a man, i'm not subject to having my output read and contradicted
    like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because
    some method must exist that can correctly decide on any given input... i
    can know that method for any given input, and therefore i can decide on
    any given input.

    As a man, you are not bound by the rules of computations, so you aren't
    eligible to be entered as a solution for a computation problem.

    why am i not bound by the rules of computation when performing valid computation?


    So, all you are stating is you are too stupid to understand the nature
    of the problem.

    And, you CAN'T claim to know the method for any given input, as there
    are an infinite number of inputs, but only finite knowledge.

    or i can algorithmically determine the correct method upon receiving the
    input ...


    I guess you have fallen for the Olcott trap of convincing yourself that
    you are God.


    this is why i'm really starting to think the ct-thesis is cooked. you
    say i can't do that because turing machines can't do that ... but
    where's the proof that turing machines encompass all of computing? why
    am i limited by the absolute nonsense that is turing machines
    producing pathological input to themselves?

    But the problem goes back to the fact that you are just showing you
    don't understand what a computation is.

    red herrings and now gaslighting


    Yes, there is no "Proof" that Turing Machines encompass all of
    computing, that is why CT is just a thesis and not a theorem. It has
    shown itself to be correct for everything we have tried so far.

    there we go, u don't have proof yet u keep asserting it's a law because
    a bandwagon has convinced u



    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you are
    treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century. i
    don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation
    cannot be generally expressed thru a turing machine computation
    input/output specification.

    No, the problem is you don't understand what computing is, and thus,
    just like Zeno, think you have come up with a paradox.

    It SEEMS (to you) that this should be computable, but it turns out it
    isn't.

    heck it even is computable, just not from a fixed point of reference


    The halting problem proof can be generalized to ANY computation
    platform, and shown to be true.




    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS,
    LOOPS, and PARADOX. when partial deciders receive PARADOX back from
    their algo run they will then just loop forever to never respond.
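
    (sketched literally as pseudo-pyscript, this proposal would look roughly
    like the following; `general_algo` is the assumed common backing
    algorithm and `self_desc` the decider's own description - a sketch of
    the claim as stated, not a claim that it works:)

        def partial_decider(self_desc, machine, inp):
            verdict = general_algo(self_desc, machine, inp)   # assumed helper
            if verdict == "PARADOX":
                while True:            # block forever rather than answer
                    pass
            return verdict             # "HALTS" or "LOOPS"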

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    Sorry, but you don't get to do that.

    And the problem is that even being given a sample "self" input, doesn't
    mean it can detect the other "self" that exists in the paradox input.

    again circular logic


    Or that doing so helps it get the right answer.

    Whatever answer your algorithm gives will be wrong.

    There is a right answer, just the opposite of the one the algorithm gives.

    That doesn't make the name for that behavior "Paradox", it will still be
    one of "Halting" or "Non-Halting", so the third answer can't be correct.


    can't determine if the input is just using a computational equivalent
    of itself that doesn't match its idea of what it looks like.

    This FACT just breaks your concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is going
    on in the fundamental theory of computing

    No, your algorithm begs the question by assuming you can compute
    something uncomputable.

    i'm showing how it being computable can co-exist with pathological
    input. i realize ur not honest enough of a person to acknowledge what
    my actual argument is, but that's because if u started acknowledging
    i'm right about things ur haunted house of cards goes tumbling down


    I guess your problem is you don't understand what an actual ALGORITHM is.

    gaslighting again, why u must argue like a child?




    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the

    it's a general algo behind the partial deciders - all it needs to do is
    either return PARADOX, in which case the partial decider decides to
    loop(), or maybe we can just extract that functionality into the
    general partial algo itself...

    But the "Halting Behavior" of the input isn't "PARADOX", so that can't
    be a correct answer.

    it's correct from the fixed point of the decision, or we can just block.


    You don't seem to understand that LYING is just LYING and incorrect.

    unfortunately turing machines are not powerful enough of a computation paradigm to fit the pigeonhole fallacy ur now making

    this is a problem with assuming the CT-thesis is correct.


    It seems that the result of your description is that ALL your partial deciders are going to just loop forever, and thus your claim that one of them will answer is just a lie.

    no



    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give
    the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and it
    passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine, so
    you are just showing you are fundamentally incorrect in your basis.

    You can't "run" and "interface", only an actual program that
    implements it.

    sure, but all the partial deciders do is construct a self-reference
    using a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in
    order to ascertain where their output feeds back in to affect the
    prediction they are making.
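
    (a minimal sketch of that quine step, assuming the same hypothetical
    `general_algo` backing algorithm; `inspect.getsource` gives the decider
    its own source text to pass along:)

        import inspect

        def partial_decider(machine, inp):
            # the "quine": obtain this decider's own description ...
            self_desc = inspect.getsource(partial_decider)
            # ... and hand it, with the input, to the assumed common algo
            return general_algo(self_desc, machine, inp)

    (note the catch raised elsewhere in the thread: a functionally
    equivalent but textually different copy of this decider inside the
    input will not match `self_desc`.)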

    But that is the problem, since the input uses the machine that is enumerating all those deciders, everyone (if they can detect themselves) will detect themselves and fail to answer.

    no it's not



    and yes the partial deciders do contain full descriptions of the
    common backing algo, but they still really do just act as an interface
    to that common algo

    they act like an exposed API/interface into the common algo

    and thus the paradox input has the code to act counter to that common algorithm, and it can NEVER give the right answer.

    it doesn't counter a general algo that decides across all fixed points,
    the pathological input can only counter a subset of fixed points under classical limitations to computing.





    "but i can loop over all partial deciders to produce a paradox" ...
    uhh no you can't? traditional computing cannot iterate over all
    functionally equivalent machines, so it certainly can't iterate over
    all almost functionally equivalent machines, so you cannot claim to
    produce a general paradox for the general algo as such a computation
    is outside the scope of classical computing limits.

    So, you are just admitting you can't use an enumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary

    So, you are just admitting that your claim is based on needing to
    compute the uncomputable, in other words, is just a lie.

    i'm saying the total enumeration is not necessary


    Your enumerable set of partial deciders will just never give an answer,
    and thus you can't say that some partial decider can answer for every possible input.



    The pathological program doesn't need to enumerate the deciders, it
    just needs to use what you make your final decider, which can only
    partially enumerate the partial deciders.

    it would in order to break the general algo across all selves. the
    self acts as the fixed point of reference to which the decision is
    made ... and while no fixed point can decide on all input, for any
    given input there is a fixed point of self that can decide on that
    input. and this can be encapsulated into a general algorithm that
    encapsulates a general procedure even if any given fixed point of self
    is not general.

    No it doesn't, it uses the fact that your outer enumerator does it.

    Your logic is based on LIES that you assume you can do it. But,


    therefore a general algo exists, even if any particular fixed point of
    decision making is contradicted.

    Nope. Again, your logic is based on the fallacy of assuming the conclusion.

    beating up that straw man, keep up the good work!



    dear god, why is computing theory such a shit show? cause we've
    been almost blindly following what the forefathers of computing said?

    No, YOU are the shit show because you don't understand that truth
    exists, but isn't always knowable. There ARE limits to what is
    computable, as problem space grows faster than solution space.

    how many fallacies have i identified in ur arguments? like 6 just in
    this one?

    ur just as garbage as polcott tbh





    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE their behavior, they HAVE specific behavior
    that depends on the input, and ALWAYS have that behavior for that input.
    The definition of a computation means it can't squirrel away the fact
    that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.

    i'm aware, i'm not really sure why ur repeating that

    and i await ur further purposeful misunderstanding


    Because you keep on trying to think of ways to get around that limitation.

    An algorithm does what the algorithm does, and can't change itself.

    IF what the algorithm does is just give the wrong answer to a problem,
    we can't say that the right answer doesn't exist just because a
    DIFFERENT problem based on a DIFFERENT algorithm gives a different answer.

    No machine has "Paradoxical" halting behavior as its specific behavior.
    It might be described as contrary to a given instance of a general algorithm, but all that shows was that algorithm was WRONG, as it still
    had specific behavior as to its actual behavior.
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Wed Jan 14 22:43:35 2026
    From Newsgroup: comp.ai.philosophy

    On 1/13/26 3:33 PM, dart200 wrote:
    On 1/13/26 4:09 AM, Richard Damon wrote:
    On 1/12/26 11:21 PM, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on
    them perceive the halting problem as a dogma. As result,
    certain practical things (in code analysis) are not even tried
    because it's assumed that they are bound by the halting problem.

    In practice, however, the halting problem is rarely a
    limitation. And even when one hits it, they can safely discard
    a particular analysis branch by marking it as inconclusive.

    Halting problem for sure can be better framed to not sound as
    a dogma, at least. In practice, algorithmic inconclusiveness
    has 0.001 probability, not a 100% guarantee as many engineers
    perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists
    for any given machine, just that a "general" decider does not
    exist for all machiens ...

    heck it must be certain that for any given machine there must
    exist a partial decider that can decide on it ... because
    otherwise a paradox would have to address all possible partial
    deciders in a computable fashion and that runs up against it's
    own limit to classical computing. therefore some true decider
    must exist for any given machine that exists ... we just can't
    funnel the knowledge thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic
    limit


    Nope, because the proof doesn't actually need to talk about HOW
    the decider actually made its decision, and thus not limited to
    Turing Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguements against the proof seem to begin with the error
    that the decider can be changed after the fact and such change
    changes the input to match, but that breaks the fundamental
    property of computations, that they are fixed algorithms

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for
    ANY specific decider, and thus no decider can get all answers
    correct.

    That the input HAS a correct answer (just the opposite of what
    that specific decider gives) shows that there IS a correct answer,
    so there is nothing wrong about the question of its halting, and
    thus a non-answer like "its behavior is contrary" is valid.

    Everyone trying to make the arguements just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they
    are deciding on ... ofc deciders can modify their behavior based on
    an input which they are included within, like they can modify their
    behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by
    the "pathological" input.


    this is how partial deciders can intelligently block on responding
    to input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't
    be more efficient that brute force, because again, they can block
    when encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded space),
    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)

    Which exist and is a form of non-halting.

    Yes, Non-Turing Complete systems, with bounded space, are Halt Decidable

    infinite loops in turing complete system are also fully enumerable just
    like halting machines are. they will always result in repeat
    configurations, and this is decidable within an unbounded amount of time using brute force.

    You seem to confuse "enumerable" with effectively enumerable.

    Enumerable means that via the process of the Axiom of Choice, some
    enumeration is possible to be found. Doing this might require having a
    "God Function" to determine if a given candidate is part of the set.

    Just because an infinite set is enumerable, doesn't mean that you CAN
    create that set in a computation.

    While the looping machines are enumerable, that doesn't mean you can
    generate a set of all such machines.

    Unless you can actually SHOW how to DECIDE on this, in countable
    unbounded time, you can't claim it.

    The problem is that while N is enumerable, and N^k is enumerable, 2^N
    is not.
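
    (the N^k case is effectively enumerable in the strong sense - a short
    pseudo-pyscript illustration for k = 2 using the Cantor diagonal walk;
    nothing analogous exists for 2^N, by Cantor's diagonal argument:)

        def cantor_pairs():
            # walks every pair of naturals exactly once, diagonal by diagonal
            n = 0
            while True:
                for i in range(n + 1):
                    yield (i, n - i)   # (0,n), (1,n-1), ..., (n,0)
                n += 1

        # e.g. the first six pairs: (0,0) (0,1) (1,0) (0,2) (1,1) (2,0)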

    Your problem is you can't tell when you have simulated a machine long
    enough to say that it is no longer a bounded space machine, and thus
    "looping" is not decidable, only recognizable.


    with simple deciders as long as the decider is allowed to be
    sufficiently bigger than the program it is deciding on. That has been
    known for a long time, but isn't an exception to the Halting Problem,
    as it doesn't meet the basic requirements.



    And nothing in the theory disagrees with partial halt deciders
    existing, they just can NEVER give an answer to the pathological
    program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there that
    must decide correctly on it. so, for any given machine there is a
    method that does correctly decide on it without ever given any wrong
    answers to any other machine (tho it may not give answers at times).

    So? Of course there is a decider that gets it right, as one of the
    deciders AlwaysSayHalts or AlwaysSaysNonhalting will be right, we just
    don't know.

    those are not valid partial deciders, why are you still bringing up red herrings?

    Because you keep on trying to assert them.

    You claim that for any given machine, there must be a machine to
    correctly decide it.

    If you don't mean those trivial deciders, PROVE your claim. You just
    assert it without any proof.

    Note, perhaps you are overlooking the implied requirement that we must
    KNOW it is correct for that input, otherwise we get to the trivial
    variation of the above: one decider that checks if it is THIS machine
    and says halting, another that says non-halting, and for any other
    machine, both just keep looping.

    I hope you admit that this would not be a valid answer to your claim.
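
    (for concreteness, the trivial deciders being referred to, plus the
    "one-shot" variation; `TARGET` is a hypothetical hard-coded machine
    description, not something defined in the thread:)

        def always_says_halts(machine, inp):
            return "HALTS"

        def always_says_nonhalting(machine, inp):
            return "NON-HALTING"

        # the trivial variation: answer only for one specific machine,
        # stay silent (loop) on everything else; its twin returns the
        # opposite verdict for TARGET.
        def one_shot_decider(machine, inp):
            if machine == TARGET:
                return "HALTS"
            while True:
                pass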



    And, even if there is an always correct partial decider that gets it
    right, it can't be part of that enumerable set, so we don't know to
    use it.

    and where's the proof i must enumerate all partial deciders in order to
    know it for any given machine???



    as a man, i'm not subject to having my output read and contradicted
    like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because
    some method must exist that can correct decide on any given input...
    i can know that method for any given input, and therefor i can decide
    on any given input.

    As a man, you are not bound by the rules of computations, so aren't
    eligable to be entered as a solution for a computation problem.

    why am i not bound by the rules of computation when performing valid computation?

    Because you are not fixed and deterministic. If I can't build a
    computation on you, you are not a computation, as that is one of the fundamentals of a computation.



    So, all you are stating is you are too stupid to understand the nature
    of the problem.

    And, you CAN'T claim to know the mathod for any given input, as there
    are an infinite number of inputs, but only finite knowledge.

    or i can algorithmically determine the correct method upon receiving the input ...

    "Get the correct answer" is not an algorithm, and resorting to it just
    shows you don't know what you are talking about.

    You "logic" is apparently still based on lying to yourself that you can
    do it, believing that lie, and then living in your stupidity,.

    If you could algorithmically determine the correct method, then the
    input could have done the same algorithm to determine the method you
    will use, and then break that result.

    All you are doing is showing you don't understand what algorithmic means.
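
    (a sketch of that point: if the "pick the right method" step is itself
    an algorithm, the pathological input can run the same step and
    contradict whatever it picks; `select_method` is a hypothetical
    selector returning a decider that reports "HALTS" or "NON-HALTING":)

        def make_pathological(select_method):
            def D():
                H = select_method(D)      # run the selector on D itself
                if H(D) == "HALTS":       # then do the opposite of its verdict
                    while True:
                        pass
                return                    # halt if it predicted non-halting
            return D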



    I guess you have fallen for the Olcott trap of convinsing yourself
    that you are God.


    this is why i'm really starting to think the ct-thesis is cooked. you
    say i can't do that because turing machines can't do that ... but
    where's the proof that turing machine encompass all of computing? why
    am i limited by the absolute nonsense that is turing machines
    producing pathological input to themselves?

    But the problem goes back to the fact that you are just showing you
    don't understand what a computation is.

    red herrings and now gaslighting

    Nope, as you keep on making statements that are just incorrect about computations.

    Name calling rather than showing the actual error just shows that you
    are out of lies to use.



    Yes, there is no "Proof" that Turing Machines encompass all of
    computing, that is why CT is just a thesis and not a theorem. It has
    shown itself to be correct for everything we have tried so far.

    there we go, u don't have proof yet u keep asserting it's a law because
    a bandwagon has convinced u

    The "Halting Problem" is specifically written about Turing Machines, and
    thus doesn't depend on CT.

    On the assumption of CT, it can be extended to all computations.

    Since no method of computation has been found that allows us to
    COMPUTE beyond what a Turing Machine can do, as far as we know the
    generalization holds.

    Note, what is a "Compuation" is rigidly defined, and isn't done by
    reference to a Turing Machine. This includes that an algorithm that
    performs a computation is based on a series of finitely and
    deterministically defined operations. This means that given a specific
    input, that algorithm will ALWAYS produce the same results.




    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you
    are treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century.
    i don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation
    cannot be generally expressed thru a turing machine computation
    input/ ouput specification.

    No, the problem is you don't understand what computating is, and thus,
    just like Zeno, think you have come up with a paradox.

    It SEEMS (to you) that this should be computable, but it turns out it
    isn't.

    heck it even is computable, just not from a fixed point of reference

    That doesn't make sense, since computations are fixed.

    All you are doing is showing you just don't fundamentally understand
    what computation is about, perhaps because you (like Olcott) think that
    'algorithms' can actually 'think' or 'decide' rather than just 'compute'.


    The halting problem proof can be generalized to ANY computation
    platform, and shown to be true.




    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self" input
    which identifies the particular interface it's computing for. this
    general algo for partial deciders will have three outputs: HALTS,
    LOOPS, and PARADOX. when partial deciders receive PARADOX back from
    their algo run they will then just loop forever to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    Sorry, but you don't get to do that.

    And the problem is that even being given a sample "self" input,
    doesn't mean it can detect the other "self" that exists in the paradox
    input.

    again circular logic

    Really? Try to show it wrong.

    Give the definite algorithm that can detect its functional equivalent in
    the paradox input.
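
    (one way to see why a purely syntactic self-check can't do it: two
    textually different programs with identical behavior, compared by
    source text only:)

        src_a = "def f(n):\n    return n + n\n"
        src_b = "def f(n):\n    return 2 * n\n"

        def looks_like_me(candidate_src, my_src):
            return candidate_src == my_src     # syntactic check only

        print(looks_like_me(src_b, src_a))     # False, yet f behaves identically
        # deciding whether two programs compute the same function is itself
        # undecidable in general, which is the point being pressed here.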



    Or that doing so helps it get the right answer.

    What ever answer you algorithm gives, will be wrong.

    There is a right answer, just the opposite of the one the algorithm
    gives.

    That doesn't make the name for that behavior "Paradox", it will still
    be one of "Halting" or "Non-Halting", so the third answer can't be
    correct.


    can't determine if the input is just using a computational
    equivalent of itself that doesn't match its idea of what it looks like.
    This FACT just breaks you concept, as you just assume you can detect
    what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is
    going on in the fundamental theory of computing

    No. you algorithm begs the question by assuming you can compute
    something uncomputable.

    i'm showing how it being computable can co-exist with pathological
    input. i realize ur not honest enough of a person to acknowledge what my actual argument is, but that's because if u started acknowledge i'm
    right about things ur haunted house of cards goes tumbling down

    No, you assume it is computable, and show that given that false
    assumption you can show that it should be.

    This is why you can't actually present your "base algorithm" because you
    need to keep changing it to handle the pathological input that gets to
    know your algorithm.



    I guess your problem is you don't understand what an actual ALGORITHM is.

    gaslighting again, why u must argue like a child?


    No, it seems it is YOU that has been inhaling too much gas.

    Since you clearly can't show what you claim.




    And, even if it CAN detect that the input is using a copy of itself,
    that doesn't help it as it still can't get the right answer, and the

    it's general algo to the partial deciders - all it needs to do is
    either it returns PARADOX in which case the partial decider decides
    to loop(), or maybe we can just extract that functionality into the
    general partial algo itself...

    But the "Halting Behavior" of the input isn't "PARADOX", so that can't
    be a correct answer.

    it's correct from the fixed point of the decision, or we can just block.

    Nope. There is no halting behavior of "Paradox". As all machines either
    halt or not.

    The best you can do is not answer, but that doesn't help you, as you end
    up not answering to a lot of inputs.



    You don't seem to understand that LYING is just LYING and incorrect.

    unfortunately turing machines are not powerful enough of a computation paradigm to fit the pigeonhole fallacy ur now making

    They are the most powerful machines we know of, so I guess you are just
    saying that we don't know how to actually compute.


    this is a problem with assuming the CT-thesis is correct.

    No, your fallacy is assuming it is incorrect, and that you have an
    unknown system that is better.

    You WANT there to be something better, and if you could actually create
    one, the world might beat a path to your door (or try to annihilate you
    for breaking too many existing systems).

    It was thought that Quantum Computing might do it, but so far we haven't
    found it able to do anything unique, just do some things that were known
    to be computable, just impractical.



    It seems that the result of your description is that ALL your partial
    deciders are going to just loop forever, and thus your claim that one
    of them will answer is just a lie.

    no

    So, which of your deciders is going to answer for the unbounded space
    machine that has no discernible patterns in its growth?




    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can give
    the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial
    decider, and that's what i mean by passing in a self. the partial
    decider must have a quine that allows it to recognize itself, and
    it passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine,
    so you are just showing you are fundamentally incorrect in your basis.
    You can't "run" and "interface", only an actual program that
    implements it.

    sure, but all the partial decider do is construct a self-reference
    using a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in
    order to ascertain where their output feedbacks into affect the
    prediction they are making.

    But that is the problem, since the input uses the machine that is
    enumerating all those deciders, everyone (if they can detect
    themselves) will detect themselves and fail to answer.

    no it's not



    and yes the partial deciders do contain full descriptions of the
    common backing algo, but they still really do just act as an
    interface to that common algo

    they act like an exposed API/interface into the common algo

    and thus the paradox input has the code to act counter to that common
    algorithm, and it can NEVER give the right answer.

    it doesn't counter a general algo that decides across all fixed points,
    the pathological input can only counter a subset of fixed points under classical limitations to computing.

    No such thing.

    The problem is that the "pathological input" isn't the only input that
    causes problems, just a simple to show one.

    There are other problems (like Busy Beaver) that are shown to be
    fundamentally uncomputable, and thus can be a base for making halting undecidable.






    "but i can loop over all partial deciders to produce a paradox" ... >>>>> uhh no you can't? traditional computing cannot iterate over all
    functionally equivalent machines, so it certainly can't iterate
    over all almost functionally equivalent machines, so you cannot
    claim to produce a general paradox for the general algo as such a
    computation is outside the scope of classical computing limits.

    So, you just admitting you can't use an emumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary

    So, you are just admitting that your claim is based on needing to
    compute the uncomputable, in other words, is just a lie.

    i'm saying the total enumeration is not necessary

    But just ASSUMING you can find the right one, IF it exists.

    Maybe you can say that all machines of the given pathological structure
    of the proof can be decided by some other partial decider, that is a
    very different thing than that ALL machines can be decided by some
    partial decider.



    Your enumerable set of partial deciders will just never give an
    answer, and thus you can't say that some partial decider can answer
    for every possible input.



    The pathological program doesn't need to enumerate the deciders, it
    just needs to user what you make your final decider, which can only
    partially enumerate the partial deciders.

    it would in order to break the general algo across all self's. the
    self acts as the fixed point of reference to which the decision is
    made ... and while no fixed point can decide on all input, for any
    given input there is a fixed point of self that can decide on that
    input. and this can be encapsulated into a general algorithm that
    encapsulated a general procedure even if any given fixed point of
    self is not general.

    No it doesn't, it uses the fact that your outer enumerator does it.

    Your logic is based on LIES that you assume you can do it. But,


    therefore a general algo exists, even if any particular fixed point
    of decision making is contradicted.

    Nope. Again, you logic is based on the fallacy of assuming the
    conclusion.

    beating up that straw man, keep up the good work!



    dear god, why is computing theory is such a shit show? cause we've
    been almost blindly following what the forefathers of computing said?

    No, YOU are the shit show because you don't understand that truth
    exsits, but isn't always knowable. There ARE limits to what is
    computable, as problem space grows faster than solution space.

    how many fallacies have i identified in ur arguments? like 6 just in
    this one?

    ur just as garbage as polcott tbh

    Pot calling the kettle black.

    You are still showing that you don't know what computations actually are,
    as it seems your thesis is that there must be a system more powerful
    than Turing Machines, even if you have no idea how it would work.






    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE there behavior, they HAVE specific behavior
    that depends on the input, and ALWAYS have that behavior for that
    input.

    The definition of a computation means it can't squirrel away the
    fact that it was once used on this particular input, and needs to do
    something different, which is what is needed to CHANGE their behavior.

    i'm aware, i'm not really sure why ur reapting that

    and i await ur further purposeful misunderstanding


    Because you keep on trying to think of ways to get around that
    limitation.

    And algorithm does what the algorithm does, and can't change itself.

    IF what the algorithm does is just give the wrong answer to a problem,
    we can't say that the right answer doesn't exist just because a
    DIFFERENT problem based on a DIFFERENT algorithm gives a different
    answer.

    No machine has "Paradoxical" halting behavior as its specific
    behavior. It might be described as contrary to a given instance of a
    general algorithm, but all that shows was that algorithm was WRONG, as
    it still had specific behavior as to its actual behavior.



    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Thu Jan 15 04:23:35 2026
    From Newsgroup: comp.ai.philosophy

    On 1/14/26 7:43 PM, Richard Damon wrote:
    On 1/13/26 3:33 PM, dart200 wrote:
    On 1/13/26 4:09 AM, Richard Damon wrote:
    On 1/12/26 11:21 PM, dart200 wrote:
    On 1/12/26 7:16 PM, Richard Damon wrote:
    On 1/12/26 5:09 PM, dart200 wrote:
    On 1/12/26 4:06 AM, Richard Damon wrote:
    On 1/6/26 10:03 PM, dart200 wrote:
    On 1/6/26 5:26 PM, olcott wrote:
    On 1/6/2026 1:47 AM, dart200 wrote:
    On 1/5/26 4:24 PM, Oleksiy Gapotchenko wrote:
    Just an external observation:

    A lot of tech innovations in software optimization area get
    discarded from the very beginning because people who work on
    them perceive the halting problem as a dogma. As result,
    certain practical things (in code analysis) are not even
    tried because it's assumed that they are bound by the halting
    problem.

    In practice, however, the halting problem is rarely a
    limitation. And even when one hits it, they can safely
    discard a particular analysis branch by marking it as
    inconclusive.

    Halting problem for sure can be better framed to not sound as
    a dogma, at least. In practice, algorithmic inconclusiveness
    has 0.001 probability, not a 100% guarantee as many engineers
    perceive it.

    god it's been such a mind-fuck to unpack the halting problem,

    but the halting problem does not mean that no algorithm exists
    for any given machine, just that a "general" decider does not
    exist for all machiens ...

    heck it must be certain that for any given machine there must
    exist a partial decider that can decide on it ... because
    otherwise a paradox would have to address all possible partial
    deciders in a computable fashion and that runs up against it's
    own limit to classical computing. therefore some true decider
    must exist for any given machine that exists ... we just can't
    funnel the knowledge thru a general interface.


    For every H there is a D such that D does the opposite
    of whatever H reports. In this case use H1 on this D.

    yes, the inability to correctly resolve halting thru a singular
    interface is a flaw of TM computing, not an inherent algorithmic
    limit


    Nope, because the proof doesn't actually need to talk about HOW
    the decider actually made its decision, and thus not limited to
    Turing Machines.

    All it needs is that the decider be limited by the rules of a
    computation.

    All the arguements against the proof seem to begin with the error
    that the decider can be changed after the fact and such change
    changes the input to match, but that breaks the fundamental
    property of computations, that they are fixed algorithms

    The proof shows that the SPECIFIC decider that the input was made
    from will get the wrong answer, and we can make such an input for
    ANY specific decider, and thus no decider can get all answers
    correct.

    That the input HAS a correct answer (just the opposite of what
    that specific decider gives) shows that there IS a correct
    answer, so there is nothing wrong about the question of its
    halting, and thus a non-answer like "its behavior is contrary"
    is valid.

    Everyone trying to make the arguements just shows they don't
    understand the basics of what a computation is.

    missed ya dick!

    given that deciders are inherently part of the execution path they
    are deciding on ... ofc deciders can modify their behavior based
    on an input which they are included within, like they can modify
    their behavior based on the properties of the input.

    No, that behavior had to always have been in them, and thus seen by
    the "pathological" input.


    this is how partial deciders can intelligently block on responding
    to input that cannot be answered thru their particular interface.

    i'm not aware of any method that can prove a partial decider can't
    be more efficient than brute force, because again, they can block
    when encountering a paradox specific to their interface.

    How does "brute force" determine non-halting?

    well it does for the halting computations (bounded time, bounded
    space),

    and proper infinite loops (unbounded time, bounded space),

    just not runaway infinite computation (unbounded time, unbounded space)
    Which exist and is a form of non-halting.

    Yes, Non-Turing Complete systems, with bounded space, are Halt Decidable
    infinite loops in turing complete system are also fully enumerable
    just like halting machines are. they will always result in repeat
    configurations, and this is decidable within an unbounded amount of
    time using brute force.

    You seem to confuse "enumerable" with effectively enumerable.

    Enumerable means that via the process of the Axiom of Choice, some enumeration is possible to be found. Doing this might require having a
    "God Function" to determine if a given candidate is part of the set.

    Just because an infinite set is enumberable, doesn't mean that you CAN create that set in a computation.

    While the looping machines are enumerable, that doesn't mean you can generate a set of all such machines.

    Unless you can actually SHOW how to DECIDE on this, in countable
    unbounded time, you can't claim it.

    THe problem is that while N is enumerable, and N^k is enumberable, 2^N
    is not.

    Your problem is you can't tell when you have simulated a machine long
    enough to say that it is no longer a bounded space machine, and thus "looping" is not deciable, only recognizable.


    with simple deciders as long as the decider is allowed to be
    sufficiently bigger than the program it is deciding on. That has been
    known for a long time, but isn't an exception to the Halting Problem,
    as it doesn't meet the basic requirements.



    And nothing in the theory disagrees with partial halt deciders
    existing, they just can NEVER give an answer to the pathological
    program based on them, as if they give an answer, it will be wrong.

    which means: for any given machine, there is a decider out there
    that must decide correctly on it. so, for any given machine there is
    a method that does correctly decide on it without ever given any
    wrong answers to any other machine (tho it may not give answers at
    times).

    So? Of course there is a decider that gets it right, as one of the
    deciders AlwaysSayHalts or AlwaysSaysNonhalting will be right, we
    just don't know.

    those are not valid partial deciders, why are you still bringing up
    red herrings?

    Because you keep on trying to assert them.

    You claim that for any given machine, there must be a machine to
    correctly decide it.

    If you don't mean those trivial decider, PROVE your claim. You just
    assert without any proof.

    Note, perhaps you are overlooking the implied requirement, we must KNOW
    that it is correct for that input, otherwise we get to the trivial
    variation of the above, that check if it is THIS machine, and if so one
    says halting and the other says non-halting, and if it is any other
    machine, just keep looping.

    I hope you admit that this would not be a valid answer to you claim.



    And, even if there is an always correct partial decider that gets it
    right, it can't be part of that enumerable set, so we don't know to
    use it.

    and where's the proof i must enumerate all partial deciders in order
    to know it for any given machine???



    as a man, i'm not subject to having my output read and contradicted
    like turing machine deciders are. i'm not subject to having to block
    indefinitely because some input is pathological to me. and because
    some method must exist that can correct decide on any given input...
    i can know that method for any given input, and therefor i can
    decide on any given input.

    As a man, you are not bound by the rules of computations, so aren't
    eligable to be entered as a solution for a computation problem.

    why am i not bound by the rules of computation when performing valid
    computation?

    Because you are not fixed and deterministic. If I can't build a
    computation on you, you are not a computation, as that is one of the fundamentals of a computation.



    So, all you are stating is you are too stupid to understand the
    nature of the problem.

    And, you CAN'T claim to know the mathod for any given input, as there
    are an infinite number of inputs, but only finite knowledge.

    or i can algorithmically determine the correct method upon receiving
    the input ...

    "Get the correct answer" is not an algorithm, and resorting to it just
    shows you don't know what you are talking about.

    You "logic" is apparently still based on lying to yourself that you can
    do it, believing that lie, and then living in your stupidity,.

    If you could algorithmically determine the correct method, then the
    input could have done the same algorithm to determine the method you
    will uses, and then break that result.

    All you are doing is showing you don't understand what algorithmic means.



    I guess you have fallen for the Olcott trap of convinsing yourself
    that you are God.


    this is why i'm really starting to think the ct-thesis is cooked.
    you say i can't do that because turing machines can't do that ...
    but where's the proof that turing machine encompass all of
    computing? why am i limited by the absolute nonsense that is turing
    machines producing pathological input to themselves?

    But the problem goes back to the fact that you are just showing you
    don't understand what a computation is.

    red herrings and now gaslighting

    Nope, as you keep on making statements that are just incorrect about computations.

    Name calling rather than showing the actual error just shows that you
    are out of lies to use.



    Yes, there is no "Proof" that Turing Machines encompass all of
    computing, that is why CT is just a thesis and not a theorem. It has
    shown itself to be correct for everything we have tried so far.

    there we go, u don't have proof yet u keep asserting it's a law
    because a bandwagon has convinced u

    The "Halting Problem" is specifically written about Turing Machines, and thus doesn't depend on CT.

    On the assumption of CT, it can be extended to all computations.

    Since no no method of compuation has been found that allows us to
    COMPUTE beyond what a Turing Machine can do, means as far as we know the

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model, like
    RTMs, u then told me i *can't* do that because muh ct-thesis, and here u
    are crying about how no superior method has been found as if u'd ever
    even tried to look past the ct-thesis...

    generalization holds.

    Note, what is a "Compuation" is rigidly defined, and isn't done by
    reference to a Turing Machine. This includes that an algorithm that
    performs a computation is based on a series of finitely and deterministically defined operations. This means that given a specific input, that algorithm will ALWAYS produce the same results.




    because turing machines *are* the fundamentals of computation??? but
    again: that's just an assumption. we never proved it, yet here you
    are treating it like unquestionable law.

    that's the flaw bro, one we've been sitting on for almost a century.
    i don't even have a proof to deconstruct, it's literally just an
    assumption, so all i need to do is construct the scenarios where
    something is obviously generally computable, but that computation
    cannot be generally expressed thru a turing machine computation
    input/ ouput specification.

    No, the problem is you don't understand what computating is, and
    thus, just like Zeno, think you have come up with a paradox.

    It SEEMS (to you) that this should be computable, but it turns out it
    isn't.

    heck it even is computable, just not from a fixed point of reference

    That doesn't make sense, since compuations are fixed.

    All you are doing is showing you just don't fundamentally understand
    what compuation is about, perhaps because you (like Olcott) think that 'algorithms' can actually 'think' or 'decide' rather than just 'compute'.


    The halting problem proof can be generalized to ANY computation
    platform, and shown to be true.




    furthermore this doesn't disprove a general algorithm backing the
    partial deciders, all the general algorithm needs is a "self"
    input which identifies the particular interface it's computing
    for. this general algo for partial deciders will have three
    outputs: HALTS, LOOPS, and PARADOX. when partial deciders receive
    PARADOX back from their algo run they will then just loop forever
    to never respond.

    Sure it does.

    The problem is it doesn't get a "self" input, and by its nature. it

    i'm defining the algo, so i say it does

    Sorry, but you don't get to do that.

    And the problem is that even being given a sample "self" input,
    doesn't mean it can detect the other "self" that exists in the
    paradox input.

    again circular logic

    Really? Try to show it wrong.

    Give the definite algorithm that can detect its functional equivalent in
    the paradox input.



    Or that doing so helps it get the right answer.

    What ever answer you algorithm gives, will be wrong.

    There is a right answer, just the opposite of the one the algorithm
    gives.

    That doesn't make the name for that behavior "Paradox", it will still
    be one of "Halting" or "Non-Halting", so the third answer can't be
    correct.


    can't determine if the input is just using a computational
    equivalent of itself that doesn't match its idea of what it looks
    like.

    This FACT just breaks you concept, as you just assume you can
    detect what is proven to be undetectable in full generality.

    using the very paradox i'm trying to solve, so that's begging the
    question. it's really kinda sad how much begging the question is
    going on in the fundamental theory of computing

    No. you algorithm begs the question by assuming you can compute
    something uncomputable.

    i'm showing how it being computable can co-exist with pathological
    input. i realize ur not honest enough of a person to acknowledge what
    my actual argument is, but that's because if u started acknowledge i'm
    right about things ur haunted house of cards goes tumbling down

    No, you assume it is computable, and show that given that false
    assumption you can show that it should be.

    This is why you can't actually present your "base algorithm" because you need to keep changing it to handle the pathological input that gets to
    know your algorithm.



    I guess your problem is you don't understand what an actual ALGORITHM
    is.

    gaslighting again, why u must argue like a child?


    No, it seems it is YOU that has been inhaling too much gas.

    Since you clearly can't show what you claim.




    And, even if it CAN detect that the input is using a copy of
    itself, that doesn't help it as it still can't get the right
    answer, and the

    it's general algo to the partial deciders - all it needs to do is
    either it returns PARADOX in which case the partial decider decides
    to loop(), or maybe we can just extract that functionality into the
    general partial algo itself...

    But the "Halting Behavior" of the input isn't "PARADOX", so that
    can't be a correct answer.

    it's correct from the fixed point of the decision, or we can just block.

    Nope. There is no halting behavior of "Paradox". As all machines either
    halt or not.

    The best you can do is not answer, but that doesn't help you, as you end
    up not answering to a lot of inputs.



    You don't seem to understand that LYING is just LYING and incorrect.

    unfortunately turing machines are not powerful enough of a computation
    paradigm to fit the pigeonhole fallacy ur now making

    They are the most powerful machines we know of, so I guess you are just saying that we don't know how to actually compute.


    this is a problem with assuming the CT-thesis is correct.

    No, your fallacy is assuming it is incorrect, and that you have an
    unknown system that is better.

    You WANT there to be something better, and if you could actually create
    one, the world might beat a path to your door (or try to annalate you
    for breaking too many existing systems).

    It was thought that Quantum Computing might do it, but so far we haven't found it able to do anything unique, just do somethings that were know
    to be computable, just impractical.



    It seems that the result of your description is that ALL your partial
    deciders are going to just loop forever, and thus your claim that one
    of them will answer is just a lie.

    no

    So, which of your deciders is going to answer for the unbounded space machine that has no decernable patterns in its growth?




    pathological input based on your general algorithm effectively uses
    copies of all the algorithms it enumerates, so NONE of them can
    give the right answer.


    yes i'm aware "interfaces" are complete descriptions of a partial >>>>>> decider, and that's what i mean by passing in a self. the partial >>>>>> decider must have a quine that allows it to recognize itself, and >>>>>> it passes this into the general algo.

    Nope, an "Interface" is NOT a complete description of ANY machine,
    so you are just showing you are fundamentally incorrect in your basis.
    You can't "run" and "interface", only an actual program that
    implements it.

    sure, but all the partial decider do is construct a self-reference
    using a quine and pass that along with the input to a common backing
    algorithm. all valid partial deciders will need accurate quines in
    order to ascertain where their output feedbacks into affect the
    prediction they are making.

    But that is the problem, since the input uses the machine that is
    enumerating all those deciders, everyone (if they can detect
    themselves) will detect themselves and fail to answer.

    no it's not



    and yes the partial deciders do contain full descriptions of the
    common backing algo, but they still really do just act as an
    interface to that common algo

    they act like an exposed API/interface into the common algo

    and thus the paradox input has the code to act counter to that common
    algorithm, and it can NEVER give the right answer.

    it doesn't counter a general algo that decides across all fixed
    points, the pathological input can only counter a subset of fixed
    points under classical limitations to computing.

    No such thing.

    The problem is that the "pathological input" isn't the only input that causes problems, just a simple to show one.

    There are other problems (like Busy Beaver) that are shown to be fundamentally uncomputable, and thus can be a base for making halting undecidable.






    "but i can loop over all partial deciders to produce a
    paradox" ... uhh no you can't? traditional computing cannot
    iterate over all functionally equivalent machines, so it certainly
    can't iterate over all almost functionally equivalent machines, so
    you cannot claim to produce a general paradox for the general algo
    as such a computation is outside the scope of classical computing
    limits.

    So, you just admitting you can't use an emumerated list of partial
    deciders to get the answer.

    which is fine, it's just not necessary

    So, you are just admitting that your claim is based on needing to
    compute the uncomputable, in other words, is just a lie.

    i'm saying the total enumeration is not necessary

    But just ASSUMING you can find the right one, IF it exists.

    Maybe you can say that all machines of the given pathological structure
    of the proof can be decided by some other partial decider, that is a
    very different thing than that ALL machines can be decided by some
    partial decider.



    Your enumerable set of partial deciders will just never give an
    answer, and thus you can't say that some partial decider can answer
    for every possible input.



    The pathological program doesn't need to enumerate the deciders, it
    just needs to use what you make your final decider, which can only
    partially enumerate the partial deciders.

    it would in order to break the general algo across all self's. the
    self acts as the fixed point of reference to which the decision is
    made ... and while no fixed point can decide on all input, for any
    given input there is a fixed point of self that can decide on that
    input. and this can be encapsulated into a general algorithm that
    encapsulated a general procedure even if any given fixed point of
    self is not general.

    No it doesn't, it uses the fact that your outer enumerator does it.

    Your logic is based on LIES that you assume you can do it. But,


    therefore a general algo exists, even if any particular fixed point
    of decision making is contradicted.

    Nope. Again, you logic is based on the fallacy of assuming the
    conclusion.

    beating up that straw man, keep up the good work!



    dear god, why is computing theory is such a shit show? cause we've
    been almost blindly following what the forefathers of computing said?

    No, YOU are the shit show because you don't understand that truth
    exsits, but isn't always knowable. There ARE limits to what is
    computable, as problem space grows faster than solution space.

    how many fallacies have i identified in ur arguments? like 6 just in
    this one?

    ur just as garbage as polcott tbh

    Pot calling the kettle black.

    You are still showing that you don't know what computations actually are,
    as it seems your thesis is that there must be a system more powerful
    than Turing Machines, even if you have no idea how it would work,






    i await to see how you purposefully misunderstand this


    It seems you are the one that doesn't understand.

    Programs can't CHANGE their behavior, they HAVE specific behavior
    that depends on the input, and ALWAYS have that behavior for that
    input.

    The definition of a computation means it can't squirrel away the
    fact that it was once used on this particular input, and needs to
    do something different, which is what is needed to CHANGE their
    behavior.
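
    (A toy contrast of the property being described, in illustrative Python: a computation maps the input and nothing else, while a machine that "squirrels away" history does not.)

        def as_a_computation(x):
            return x * 2              # output depends only on the input

        _seen = set()
        def not_a_computation(x):
            if x in _seen:            # remembers it already saw this input...
                return x * 3          # ...and answers differently the second time
            _seen.add(x)
            return x * 2

        # The second function gives the same input different outputs over time, so
        # it does not compute a single fixed input -> output mapping.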

    i'm aware, i'm not really sure why ur repeating that

    and i await ur further purposeful misunderstanding


    Because you keep on trying to think of ways to get around that
    limitation.

    An algorithm does what the algorithm does, and can't change itself.

    IF what the algorithm does is just give the wrong answer to a
    problem, we can't say that the right answer doesn't exist just
    because a DIFFERENT problem based on a DIFFERENT algorithm gives a
    different answer.

    No machine has "Paradoxical" halting behavior as its specific
    behavior. It might be described as contrary to a given instance of a
    general algorithm, but all that shows was that algorithm was WRONG,
    as it still had specific behavior as to its actual behavior.



    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Thu Jan 15 22:28:00 2026
    From Newsgroup: comp.ai.philosophy

    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model, like
    RTMs, u then told me i *can't* do that because muh ct-thesis, and here u
    are crying about how no superior method has been found as if u'd ever
    even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed you
    don't know what that means.

    You don't get to change what a "computation" is, that isn't part of the "model".

    The model would be the format of the machine, and while your RTM might
    be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete ability,
    is the ability to cascade algorithms.

    Your RTMs break that capability, and thus become less than Turing Complete.

    And, any algorithm that actually USES that capability to detect whether it
    has been nested will become incorrect as a decider, as a decider is a
    machine that computes a specific mapping of its input to its output, and
    if that result changes in the submachine, only one of the answers it
    gives (as a stand-alone, or as the sub-machine) can be right, so you
    just show that it gave a wrong answer.

    This is sort of like the problem with a RASP machine architecture:
    sub-machines on such a platform are not necessarily computations, if
    they use the machine's capability to pass information not allowed by the
    rules of a computation. Your RTMs similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produces; they are specifically defined by producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing a computation.

    And without that property, using what the machine could do becomes a
    pretty worthless criterion, as you can't actually talk much about it.
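
    (An illustrative toy of the composition point, with assumed names rather than any formal model: chaining computations is plain function composition, and a sub-machine that can see whether it is nested breaks that.)

        def f(s): return s + "a"
        def g(s): return s + "b"
        def composed(s): return g(f(s))          # still one input -> output mapping

        def sub(s, nesting_depth):
            # reads information about how it was invoked, not just its input
            return s.upper() if nesting_depth > 0 else s.lower()

        # sub("Hi", 0) != sub("Hi", 1): the result depends on context outside the
        # declared input, which is exactly the property objected to above.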
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 01:08:19 2026
    From Newsgroup: comp.ai.philosophy

    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model, like
    RTMs, u then told me i *can't* do that because muh ct-thesis, and here
    u are crying about how no superior method has been found as if u'd
    ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed you
    don't know what that means.

    You don't get to change what a "computation" is, that isn't part of the "model".

    you honestly could have just said that cause the rest of this is just u repeating urself as if that makes it more correct


    The model would be the format of the machine, and while your RTM might
    be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete ability,
    is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction that
    dumps static meta-data + copies tape ... how have they *lost* power with that??? clearly they can express anything that TMs can ...


    And, any algorithm that actually USES their capability to detect if they have been nested will become incorrect as a decider, as a decider is a machine that computes a specific mapping of its input to its output, and
    if that result changes in the submachine, only one of the answers it
    gives (as a stand-alone, or as the sub-machine) can be right, so you
    just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the "one
    true way". seems like u just enjoy shooting urself in the foot, with the
    only actual rational way being it's just the "one true way"


    This is sort of like the problem with a RASP machine architecture, sub-machines on such a platform are not necessarily computations, if they
    use the machines capability to pass information not allowed by the rules
    of a computation. Your RTM similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing a computation.

    And without that property, using what the machine could do, becomes a
    pretty worthless criteria, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 11:46:56 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model, like
    RTMs, u then told me i *can't* do that because muh ct-thesis, and
    here u are crying about how no superior method has been found as if
    u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed you
    don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you get to,



    The model would be the format of the machine, and while your RTM might
    be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction that
    dumps static meta-data + copies tape ... how have they *lost* power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have that
    sort of "instructions".



    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a decider
    is a machine that computes a specific mapping of its input to its
    output, and if that result changes in the submachine, only one of the
    answers it gives (as a stand-alone, or as the sub-machine) can be
    right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the "one
    true way". seems like u just enjoy shooting urself in the foot, with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.



    This is sort of like the problem with a RASP machine architecture,
    sub-machines on such a platform are not necessarily computations, if
    they use the machines capability to pass information not allowed by
    the rules of a computation. Your RTM similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing a
    computation.

    And without that property, using what the machine could do, becomes a
    pretty worthless criteria, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes "hidden"
    state from outside that input stored elsewhere in the machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if you
    do that, you find that since you are trying to make it so the caller
    can't just define that context, your system is less than turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 14:21:04 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found as
    if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed you
    don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just
    u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you get
    to,

    but repeating urself doesn't make it more true




    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction that
    dumps static meta-data + copies tape ... how have they *lost* power
    with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the list
    of possible operations with RTMs, my god dude...

    see how fucking unhelpful u are???




    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input to
    its output, and if that result changes in the submachine, only one of
    the answers it gives (as a stand-alone, or as the sub-machine) can be
    right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the "one
    true way". seems like u just enjoy shooting urself in the foot, with
    the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    add that to the list of growing fallacies i've pointed out in ur recent arguments, which i'm sure ur not actually tracking, as that would be far
    more honesty than u are capable of putting out.




    This is sort of like the problem with a RASP machine architecture,
    sub-machines on such a platform are not necessarily computations, if
    they use the machines capability to pass information not allowed by
    the rules of a computation. Your RTM similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing
    a computation.

    And without that property, using what the machine could do, becomes a
    pretty worthless criteria, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes "hidden" state from outside that input stored elsewhere in the machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if you
    do that, you find that since you are trying to make it so the caller
    can't just define that context, your system is less than turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u don't want a context-dependent computation, don't include a context-dependent sub-computation.

    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these
    computations aren't generally computable by TMs because TMs lack the
    necessary mechanisms to grant context-awareness.

    unless u can produce some actual proof of some computation that actually breaks in context-dependence, rather than just listing things u assume
    are true, i won't believe u know what ur talking about
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Fri Jan 16 16:58:20 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/2026 4:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just
    u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true




    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the list
    of possible operations with RTMs, my god dude...

    see how fucking unhelpful u are???




    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,


    *The essence of all Computation generically defined*

    Computation only applies finite string
    transformation rules to finite string inputs.

    Computable functions are Computations that
    always stop running.

    The empty string counts as a string.
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng,sci.logic,sci.math on Fri Jan 16 18:21:05 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 5:58 PM, olcott wrote:
    On 1/16/2026 4:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is
    just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true




    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.
    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the
    list of possible operations with RTMs, my god dude...

    see how fucking unhelpful u are???




    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,


    *The essence of all Computation generically defined*

    Computation only applies finite string
    transformation rules to finite string inputs.

    Computable functions are Computations that
    always stop running.

    The empty string counts as a string.


    In other words, you admit to not knowing what you are talking about.

    Note that Computable Functions are a subset of the category Function,
    which are MAPPINGS of input to output (not something a computing machine
    does; that is a meaning in a different field of Computer Science).

    No "algorithm" needed to define a function, and it doesn't run in "steps"

    Computable Functions are Functions for which there exists a proper
    finite algorithm that can compute them for all values.

    It is that finite algorithm that needs to stop, not the "function", as functions don't run at all; they just are.
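
    (A small illustration of the function/algorithm split being made here; the examples are mine.)

        # The mapping n -> 1 + 2 + ... + n "just is"; this finite algorithm is one
        # way to compute it, and it halts for every n, so the function is computable.
        def triangular(n: int) -> int:
            total = 0
            for i in range(1, n + 1):
                total += i
            return total

        # By contrast, the Busy Beaver mapping mentioned upthread is a perfectly
        # well-defined function, but no finite algorithm computes it for all n.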

    You have had this told to you many times, but you don't care that you are
    just repeating your lies, as "facts" aren't important to you. It
    seems words are flexible in meaning and you just redefine them as you
    want, which means everything you say is actually meaningless, as your
    words no longer have established meaning.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Richard Damon@Richard@Damon-Family.org to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 18:21:10 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just
    u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the list
    of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.


    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    Try to specify the tuple that your "operation" is.





    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,




    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be irrespective
    of the actual machinery used to perform it.

    It is, by definition, an algorithm computing a given mapping.

    Said mappings are BY DEFINITION mappings from the "input" to the "output".

    If the machine can produce two different outputs from the same input, the machine cannot be a computation.


    add that to the list of growing fallacies i've pointed out in ur recent arguments, which i'm sure ur not actually tracking, as that would be far more honesty than u are capable of putting out.

    So, what is the fallacy?

    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine architecture,
    sub-machines on such a platform are not necessarily computations,
    if they use the machines capability to pass information not allowed
    by the rules of a computation. Your RTM similarly break that property.
    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a specific
    input might produce different output, your architecture is NOT doing
    a computation.

    And without that property, using what the machine could do, becomes
    a pretty worthless criteria, as you can't actually talk much about it.

    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if you
    do that, you find that since you are trying to make it so the caller
    can't just define that context, your system is less than turing complete.

    Your system break to property of building a computation by the
    concatination of sub-computations.

    ...including a context-dependent sub-computation makes ur overall computation context-dependent too ... if u dont want a context-dependent computation don't include context-dependent sub-computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.


    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these computations aren't generally computable by TMs because TMs lack the necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual computations.


    unless u can produce some actual proof of some computation that actually breaks in context-dependence, rather than just listing things u assume
    are true, i won't believe u know what ur talking about


    The definition.

    A computation produces the well defined result based on the INPUT.

    Your context, being not part of the input, can't change the well-defined result.

    Should 1 + 2 become 4 on Thursdays? Or if asked by a gingerbread man?

    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory where
    the result can depend on things that aren't part of the actual input to
    the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what you
    will get, since you don't know what the "hidden" factors are.
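
    (A toy version of the "1 + 2 on Thursdays" point, with illustrative names.)

        import datetime

        def add(a, b):
            return a + b                               # fixed mapping of its inputs

        def add_with_context(a, b):
            if datetime.date.today().weekday() == 3:   # Thursday
                return a + b + 1
            return a + b

        # add(1, 2) is always 3; add_with_context(1, 2) is 3 or 4 depending on the
        # day it happens to run, so it does not define a single input -> output mapping.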
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From olcott@polcott333@gmail.com to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 18:35:03 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/2026 4:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is just
    u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true




    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.

    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the list
    of possible operations with RTMs, my god dude...

    see how fucking unhelpful u are???




    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,


    Computation only applies finite string
    transformation rules to finite string inputs.

    Deciders transform finite string inputs by
    finite string transformation rules into
    {Accept, Reject} values.
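
    (A minimal illustration of a decider in this sense, with a made-up example: a finite string input mapped by fixed transformation rules to {Accept, Reject}.)

        def balanced_parens_decider(s: str) -> str:
            depth = 0
            for ch in s:
                if ch == "(":
                    depth += 1
                elif ch == ")":
                    depth -= 1
                    if depth < 0:
                        return "Reject"
            return "Accept" if depth == 0 else "Reject"

        # It always halts on every finite string (the empty string included), so in
        # the terms above it is a computable function into {Accept, Reject}.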
    --
    Copyright 2026 Olcott

    My 28 year goal has been to make
    "true on the basis of meaning expressed in language"
    reliably computable.

    This required establishing a new foundation
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From dart200@user7160@newsgrouper.org.invalid to comp.theory,comp.ai.philosophy,comp.software-eng on Fri Jan 16 16:43:27 2026
    From Newsgroup: comp.ai.philosophy

    On 1/16/26 3:21 PM, Richard Damon wrote:
    On 1/16/26 5:21 PM, dart200 wrote:
    On 1/16/26 8:46 AM, Richard Damon wrote:
    On 1/16/26 4:08 AM, dart200 wrote:
    On 1/15/26 7:28 PM, Richard Damon wrote:
    On 1/15/26 7:23 AM, dart200 wrote:

    bro stick a giant dildo up ur asshole u hypocritical fuckface...

    when i tried to suggest improvements to the computational model,
    like RTMs, u then told me i *can't* do that because muh ct-thesis,
    and here u are crying about how no superior method has been found
    as if u'd ever even tried to look past the ct-thesis...

    No, you didn't suggest improvements to the model, you just showed
    you don't know what that means.

    You don't get to change what a "computation" is, that isn't part of
    the "model".

    you honestly could have just said that cause the rest of this is
    just u repeating urself as if that makes it more correct


    But I HAVE said it that simply, and you rejected it as you think you
    get to,

    but repeating urself doesn't make it more true

    And your ignoring it doesn't make it false.





    The model would be the format of the machine, and while your RTM
    might be a type of machine that could be thought of, they don't do
    COMPUTATIONS, as it violates the basic rules of what a computation IS.
    Computations are specific algorithms acting on just the input data.

    A fundamental property needed to reach at least Turing Complete
    ability, is the ability to cascade algorithms.

    Your RTM break that capability, and thus become less than Turing
    Complete.

    i'm sorry, RTMs are literally just TMs with one added instruction
    that dumps static meta-data + copies tape ... how have they *lost*
    power with that??? clearly they can express anything that TMs can ...

    Which means you don't understand how "TM"s work, as they don't have
    that sort of "instructions".

    fuck dude sorry "operation" is the term turing used, i added to the
    list of possible operations with RTMs, my god dude...

    But the only "operations" that a turing machine does is write a
    specified value to the tape, move the tape, and change state.

    yes RTMs are an extension of TMs, please do pay attention



    see how fucking unhelpful u are???

    So, how is your "operation" of the same class as what they do?

    cause it's just as mechanically feasible. mechanical feasibility is self-evident just like with the other rules of turing machines.


    Try to specify the tuple that your "operation" is.

    idk what you mean by this, REFLECT is just another operation like
    HEAD_LEFT, HEAD_RIGHT, or WRITE_<symbol>. the transition table has a
    list of transition functions:

    cur_state, head_symbol -> action, nxt_state

    and REFLECT goes into the action slot specifying the action that should
    be taken to transition the tape to the next step.
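
    (A rough sketch of what such a transition table might look like, with made-up states and symbols; REFLECT's semantics here are only what the posts above describe, i.e. dump static meta-data and copy the tape.)

        transitions = {
            # (cur_state, head_symbol): (action, nxt_state)
            ("scan", "0"): ("HEAD_RIGHT", "scan"),
            ("scan", "1"): ("WRITE_0",    "scan"),
            ("scan", "_"): ("REFLECT",    "done"),   # the one added operation
        }

        # "done" has no outgoing entries, so the machine halts there. An ordinary TM
        # only ever puts write/move actions in the action slot; the RTM claim is that
        # adding REFLECT there leaves the rest of the machinery untouched.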






    And, any algorithm that actually USES their capability to detect if
    they have been nested will become incorrect as a decider, as a
    decider is a machine that computes a specific mapping of its input
    to its output, and if that result changes in the submachine, only
    one of the answers it gives (as a stand-alone, or as the sub-
    machine) can be right, so you just show that it gave a wrong answer.

    u have proof that doesn't work yet you keep asserting this is the
    "one true way". seems like u just enjoy shooting urself in the foot,
    with the only actual rational way being it's just the "one true way"

    IT IS DEFINITION. Something you don't seem to understand.

    "Computation" is NOT defined by what some machine does, that is
    algorithms and results. "Computation" is the mapping generated by it,
    which MUST be a specific mapping of input to output.

    no one has defined "computation" well enough to prove that turing
    machines can compute them all,

    that's why it's the ct-thesis dude, not ct-law,

    ur just affirming the consequent without proof.

    No, the DEFINITION of a computation defines what it can be irrespective
    of the actual machinery used to perform it.

    It is, by definition, the algorithm computing of a given mapping.

    Said maps, are BY DEFINITION mappings from the "input" to the "output".

    If the machine can produce two different outputs from the same input, the machine cannot be a computation.

    a context-dependent computation is computing a mapping that isn't
    directly specified by the formal input params. it's computing a mapping of:

    (context, input) -> output

    or more generally just

    context -> output

    since the formal input is just a specific part of the context. and the
    reason we got stuck on the halting problem for a fucking century is
    ignoring that context matters.
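
    (A minimal sketch of the "(context, input) -> output" idea as stated above; the names are illustrative.)

        def context_dependent(context: dict, x: int) -> int:
            # context stands in for whatever the runtime would expose, e.g. nesting depth
            return x + context.get("nesting_depth", 0)

        # context_dependent({"nesting_depth": 0}, 5) == 5
        # context_dependent({"nesting_depth": 2}, 5) == 7
        # Deterministic once the context is fixed, but not a function of the formal
        # input alone; whether that still counts as a "computation" is the dispute here.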



    add that to list of the growing fallacies i've pointed out in ur
    recent arguments, which i'm sure ur not actually tracking, as that
    would be far more honesty than u are capable of putting out.

    So, what is the fallacy?

    AFFIRMING THE CONSEQUENT


    It seems you just assume you are allowed to change the definition,
    perhaps because you never bothered to learn it.





    This is sort of like the problem with a RASP machine architecture,
    sub-machines on such a platform are not necessarily computations,
    if they use the machines capability to pass information not allowed
    by the rules of a computation. Your RTM similarly break that property.

    Remember, Computations are NOT just what some model of processing
    produce, but specifically is defined based on producing a specific
    mapping of input to output, so if (even as a sub-machine) a
    specific input might produce different output, your architecture is
    NOT doing a computation.

    And without that property, using what the machine could do, becomes
    a pretty worthless criteria, as you can't actually talk much about it.
    the output is still well-defined and deterministic at runtime,

    Not from the "input" to the piece of algorithm, as it includes
    "hidden" state from outside that input stored elsewhere in the machine.


    context-dependent computations are still computations. the fact TMs
    don't capture them is an indication that the ct-thesis may be false


    Nope. Not unless the "context" is made part of the "input", and if
    you do that, you find that since you are trying to make it so the
    caller can't just define that context, your system is less than
    turing complete.

    Your system breaks the property of building a computation by the
    concatenation of sub-computations.

    ...including a context-dependent sub-computation makes ur overall
    computation context-dependent too ... if u dont want a context-
    dependent computation don't include context-dependent sub-computation.

    Which makes it not a computation.

    PERIOD.

    Fallacy of equivocation.

    i'm not shifting meaning dude. i'm directly claiming it's a distinct
    type of computation that has been ignored by the theory of computing
    thus far

    nice try tho



    but in order to be complete and coherent, certain computations *must*
    have context-awareness and are therefore context-dependent. these
    computations aren't generally computable by TMs because TMs lack the
    necessary mechanisms to grant context-awareness.

    In other words, you require some computations to not be actual
    computations.


    unless u can produce some actual proof of some computation that
    actually breaks in context-dependence, rather than just listing things
    u assume are true, i won't believe u know what ur talking about


    The definition.

    A computation produces the well defined result based on the INPUT.

    context-dependent computation simply expands its input to include the
    entire computing context, not just the formal parameters. it's still
    well defined and it grants us access to meta computation that is not as expressible in TM computing.

    ct-thesis is cooked dude


    Your context, being not part of the input, can't change the well-defined result.

    Should 1 + 2 become 4 on Thursdays? Or if asked by a gingerbread man?

    ur overgeneralizing. just because some computation is context-dependent
    doesn't mean all computation is context-dependent.

    another fallacy.


    All you are doing is saying you disagree with the definition.

    Go ahead, try to define an alternate version of Computation Theory where
    the result can depend on things that aren't part of the actual input to
    the machine, and see what you can show that is useful.

    The problem becomes that you can't really say anything about what you
    will get, since you don't know what the "hidden" factors are.

    ??? i was very clear multiple times over what the "hidden" input was.
    there's nothing random about it, context-dependent computation is just
    as well-defined and deterministic as context-independent computation
    --
    arising us out of the computing dark ages,
    please excuse my pseudo-pyscript,
    ~ nick
    --- Synchronet 3.21b-Linux NewsLink 1.2