• Re: Naughty Python (was Re: naughty Pascal)

    From Lawrence D’Oliveiro@ldo@nz.invalid to alt.folklore.computers on Fri Jan 9 18:37:09 2026
    From Newsgroup: alt.folklore.computers

    On Fri, 9 Jan 2026 08:09:12 -0700, Peter Flass wrote:

    Never looked at Python, but I'm a huge Rexx fan.

    Rexx seems to work entirely in strings, whereas Python has proper types.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Fri Jan 9 18:38:58 2026

    On Fri, 9 Jan 2026 08:13:59 -0700, Peter Flass wrote:

    I spent a couple of years writing FORTRAN for the 1130. They called
    it FORTRAN IV, but it was more like III.V, but still better than OS
    FORTRAN at the time. Later I worked on an XDS Sigma system, and
    their FORTRAN was great (as you'd expect with its SDS heritage). At
    the time I liked the language, but I always preferred PL/I.

    When PL/I was still under development, at one point it was going to be
    called “FORTRAN VI”.
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Sat Jan 10 20:47:47 2026

    On Sat, 10 Jan 2026 19:39:05 GMT, Charlie Gibbs wrote:

    In fact, to work both ways, my code is still full of constructs like
    this:

    #ifdef PROTOTYPE
    int foo(char *bar, BOOL baz)
    #else
    int foo(bar, baz) char *bar; BOOL baz;
    #endif

    What a pain-in-the-bum way of writing things.

    K&R C is gone, people. Let it go.
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Sat Jan 10 20:52:12 2026

    On Sat, 10 Jan 2026 19:39:05 GMT, Charlie Gibbs wrote:

    I like to be a little more explicit, so I say "#ifdef DELETE_THIS".

    We have version control nowadays. You can actually delete stuff from
    your source, and trust to the version history to keep a record of what
    used to be there.
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 00:02:22 2026

    On Sat, 10 Jan 2026 20:52:12 -0000 (UTC), Lawrence D’Oliveiro wrote:

    On Sat, 10 Jan 2026 19:39:05 GMT, Charlie Gibbs wrote:

    I like to be a little more explicit, so I say "#ifdef DELETE_THIS".

    We have version control nowadays. You can actually delete stuff from
    your source, and trust to the version history to keep a record of what
    used to be there.

    It's getting hard to cover your tracks. We used Subversion and 'svn blame'
    was useful. If you were more polite you could use 'svn praise'.

  • From Nuno Silva@nunojsilva@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 00:27:20 2026

    On 2026-01-10, Lawrence D’Oliveiro wrote:

    On Sat, 10 Jan 2026 19:39:05 GMT, Charlie Gibbs wrote:

    In fact, to work both ways, my code is still full of constructs like
    this:

    #ifdef PROTOTYPE
    int foo(char *bar, BOOL baz)
    #else
    int foo(bar, baz) char *bar; BOOL baz;
    #endif

    What a pain-in-the-bum way of writing things.

    K&R C is gone, people. Let it go.

    People should have the choice of writing that way if they want. And you
    always have the choice of not reading it.


    (If this sounds too harsh to somebody: I wrote this because of how
    Lawrence repeatedly mentioned "choice" as a way to dismiss criticism in
    other threads in comp.os.linux.misc.)
    --
    Nuno Silva
  • From John Ames@commodorejohn@gmail.com to comp.os.linux.misc,alt.folklore.computers on Sat Jan 10 18:41:15 2026

    On Sun, 11 Jan 2026 00:27:20 +0000
    Nuno Silva <nunojsilva@invalid.invalid> wrote:

    People should have the choice of writing that way if they want. And
    you always have the choice of not reading it.

    (If this sounds too harsh to somebody: I wrote this because of how
    Lawrence repeatedly mentioned "choice" as a way to dismiss criticism
    in other threads in comp.os.linux.misc.)

    While I appreciate the zing, I do have to opine for the record that
    K&R function definitions really are something best left buried with
    K&R C.
    I'm willing to write in ANSI C for the sake of whatever random weirdo
    wants to try building something of mine on an old proprietary *nix, but
    man I am *not* going any farther back than that.

  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Sat Jan 10 20:06:34 2026

    On 1/10/26 19:41, John Ames wrote:
    On Sun, 11 Jan 2026 00:27:20 +0000
    Nuno Silva <nunojsilva@invalid.invalid> wrote:

    People should have the choice of writing that way if they want. And
    you always have the choice of not reading it.

    (If this sounds too harsh to somebody: I wrote this because of how
    Lawrence repeatedly mentioned "choice" as a way to dismiss criticism
    in other threads in comp.os.linux.misc.)

    While I appreciate the zing, I do have to opine for the record that
    K&R function definitions really are something best left buried with
    K&R C.
    I'm willing to write in ANSI C for the sake of whatever random weirdo
    wants to try building something of mine on an old proprietary *nix, but
    man I am *not* going any farther back than that.


    It's never good to foreclose your options. One of my goals for Iron
    Spring PL/I is compatibility with the widest base of code possible. It
    can compile and run IBM PL/I(F) code from 1965. The newer stuff is
    better, but rewriting something that works is a pain.
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Sat Jan 10 22:44:58 2026

    On 1/10/26 22:06, Peter Flass wrote:
    On 1/10/26 19:41, John Ames wrote:
    On Sun, 11 Jan 2026 00:27:20 +0000
    Nuno Silva <nunojsilva@invalid.invalid> wrote:

    People should have the choice of writing that way if they want. And
    you always have the choice of not reading it.

    (If this sounds too harsh to somebody: I wrote this because of how
    Lawrence repeatedly mentioned "choice" as a way to dismiss criticism
    in other threads in comp.os.linux.misc.)

    While I appreciate the zing, I do have to opine for the record that K&R
    function definitions really are something best left buried with K&R C.
    I'm willing to write in ANSI C for the sake of whatever random weirdo
    wants to try building something of mine on an old proprietary *nix, but
    man I am *not* going any farther back than that.


    It's never good to foreclose your options. One of my goals for Iron
    Spring PL/I is compatibility with the widest base of code possible. It
    can compile and run IBM PL/I(F) code from 1965. The newer stuff is
    better, but rewriting something that works is a pain.

    You got Iron Spring to run properly ?

    Cool.

    And yea, why re-invent the wheel ?

    FORTRAN ... a near ZILLION libs/functions
    writ by serious pros for engineering and
    scientific needs. It's all there, tested
    over and over. Why try to re-do it in 'C'
    or Python ??? Just use FORTRAN.

    My complaint with PL/I is the kind of dated
    60s syntax ... however it COULD do most anything.
    I remember when it was promoted as "The Lang To
    Replace All Others" ....

    Turned out to be Python instead.

    Just downloaded a COBOL IDE ... I'd looked at it
    some years ago and it wasn't bad. Now, because of
    the stuff here, I just *have* to write some COBOL
    again :-)

    Of course the AIs are replacing 'programmers'. It
    will soon become like, well, becoming expert with
    medieval/ancient musical instruments. The pointy-
    haired bosses won't hire you - just TRUST the AI
    to make good, secure, apps they 'describe'.

    Disaster - but nobody will admit it.

    And the pointy-haired bosses ... any probs are
    SOMEONE ELSE'S FAULT ... perfect !

  • From Charlie Gibbs@cgibbs@kltpzyxm.invalid to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 03:50:23 2026

    On 2026-01-11, Peter Flass <Peter@Iron-Spring.com> wrote:

    On 1/10/26 19:41, John Ames wrote:

    On Sun, 11 Jan 2026 00:27:20 +0000
    Nuno Silva <nunojsilva@invalid.invalid> wrote:

    People should have the choice of writing that way if they want. And
    you always have the choice of not reading it.

    (If this sounds too harsh to somebody: I wrote this because of how
    Lawrence repeatedly mentioned "choice" as a way to dismiss criticism
    in other threads in comp.os.linux.misc.)

    While I appreciate the zing, I do have to opine for the record that K&R
    function definitions really are something best left buried with K&R C.
    I'm willing to write in ANSI C for the sake of whatever random weirdo
    wants to try building something of mine on an old proprietary *nix, but
    man I am *not* going any farther back than that.

    It's never good to foreclose your options. One of my goals for Iron
    Spring PL/I is compatibility with the widest base of code possible. It
    can compile and run IBM PL/I(F) code from 1965. The newer stuff is
    better, but rewriting something that works is a pain.

    Thanks, Peter. At the time I came up with this scheme, my programs had
    to run on an ancient version of SCO UNIX, as well as Windows and Linux.

    My code tends to be like an ATV: it might not be pretty,
    but it'll go anywhere.
    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 01:52:02 2026

    On 1/10/26 21:41, John Ames wrote:
    On Sun, 11 Jan 2026 00:27:20 +0000
    Nuno Silva <nunojsilva@invalid.invalid> wrote:

    People should have the choice of writing that way if they want. And
    you always have the choice of not reading it.

    (If this sounds too harsh to somebody: I wrote this because of how
    Lawrence repeatedly mentioned "choice" as a way to dismiss criticism
    in other threads in comp.os.linux.misc.)

    While I appreciate the zing, I do have to opine for the record that
    K&R function definitions really are something best left buried with
    K&R C.
    I'm willing to write in ANSI C for the sake of whatever random weirdo
    wants to try building something of mine on an old proprietary *nix, but
    man I am *not* going any farther back than that.

    Look ... nobody is going to be 'writing' much
    of ANYTHING within five years. The "AI" will do
    it all - probably led by the pointy-haired bosses
    who can't find their ass even with a spy sat.

    And Win/Lin/IX ... I think they're going to go
    away as well. It'll all just be thin clients
    plugged into the leading AI engines. No more
    operating systems.

    Maybe PIs ... maybe.

    "Programming" is going to be like those who learn
    to play ancient Greek musical instruments ... an
    interesting, but obsolete, old art. "AI" for worse
    or worser, will be IT. Many TRILLIONS of dollars
    invested in this - it is GOING to be The Future
    whether we like it or not.

    Just sayin'

  • From John Ames@commodorejohn@gmail.com to comp.os.linux.misc,alt.folklore.computers on Sat Jan 10 23:34:32 2026

    On Sun, 11 Jan 2026 01:52:02 -0500
    c186282 <c186282@nnada.net> wrote:

    Look ... nobody is going to be 'writing' much of ANYTHING within five
    years. The "AI" will do it all - probably led by the pointy-haired
    bosses who can't find their ass even with a spy sat.

    The "AI" bubble isn't going to *last* another five years, full stop.
    Frankly, I'll be shocked if it makes it to '28, if that.

    You're not wrong that the PHBs would *love* to have a Magic Genie
    Friend who answers their poorly-specified and unreasonable demands
    without question, even if it doesn't actually *work* - but the current
    trend of "throw as much raw compute at the same moronic Markov-chain
    solution as possible, and somehow scrounge up more training data than
    THE ENTIRE INTERNET" will collapse under its own weight *long* before
    we ever get there.

    Why? There plain-and-simple *isn't the money for it* - global VC funds
    are draining at a staggering rate and the needle is inching closer and
    closer to *empty,* and major financial backers have finally started to
    go "wait, what if throwing infinite money at a thing that doesn't work
    *isn't* a sound investment strategy...?" And meanwhile, the morons at
    the major "AI" startups are continuing on with their moronic strategy,
    so that they continually need fresh injections of *even more money than
    they asked for last time.* PHBs will double down on a bad bet every
    time, but even *they* only have so many hundreds of billions to throw
    around willy-nilly.

    I'd strongly encourage you to check out Ed Zitron on this point: https://www.wheresyoured.at/the-enshittifinancial-crisis/#blue-owl-in-a-coal-mine
    It's looking a helluva lot like the "pop" has *already started* -
    one of the easiest partners in the funding business is looking at
    new "AI" datacenter deals (which they're already up to their
    *eyeballs* in) and going "um, no thanks." That's a "Jurassic Park"
    ripple-in-the-glass moment, right there.

    In ten years, this will be the stupidest, most expensive fiasco of a
    footnote in the history of the tech industry, and more than likely
    the harbinger of a global financial disaster - but it *won't* be the
    End Of Knowledge Work that the PHBs keep fantasizing it will.

  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 02:50:59 2026

    On 1/11/26 02:34, John Ames wrote:
    On Sun, 11 Jan 2026 01:52:02 -0500
    c186282 <c186282@nnada.net> wrote:

    Look ... nobody is going to be 'writing' much of ANYTHING within five
    years. The "AI" will do it all - probably led by the pointy-haired
    bosses who can't find their ass even with a spy sat.

    The "AI" bubble isn't going to *last* another five years, full stop.
    Frankly, I'll be shocked if it makes it to '28, if that.

    Don't confuse the "bubble" - which IS doomed - with
    the tech.

    Expect maybe just TWO big AI providers to survive.
    CHAT will be one of them.

    You're not wrong that the PHBs would *love* to have a Magic Genie
    Friend who answers their poorly-specified and unreasonable demands
    without question, even if it doesn't actually *work* - but the current
    trend of "throw as much raw compute at the same moronic Markov-chain
    solution as possible, and somehow scrounge up more training data than
    THE ENTIRE INTERNET" will collapse under its own weight *long* before
    we ever get there.


    This is what I'm saying ... a LOT of stuff IS going
    to implode once AI starts writing everything.

    But, nobody CARES ... the AI shit is WAY too HOT
    for anyone to THINK.

  • From Harold Stevens@wookie@aspen.localdomain to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 05:55:12 2026

    In <36F8R.627474$3Sk8.623325@fx46.iad> Charlie Gibbs:

    [Snip...]

    My code tends to be like an ATV: it might not be pretty,
    but it'll go anywhere.

    +1

    Some of the hottest flamewars I've seen in comp.lang.fortran basically
    involved yelling about deprecated syntax embedded in ancient numerical
    code (EISPACK, BLAS, etc.) used in production dating back to the 1960s.

    It's very similar to the Wayland mob deciding what's best for users.

    Greybeard quants like me operated on 3 simple maxims:

    1. Anything that works is better than anything that doesn't.
    2. If it ain't broke, don't fix it.
    3. If it breaks, don't ignore it.
    --
    Regards, Weird (Harold Stevens) * IMPORTANT EMAIL INFO FOLLOWS *
    Pardon any bogus email addresses (wookie) in place for spambots.
    Really, it's (wyrd) at att, dotted with net. * DO NOT SPAM IT. *
    I toss (404) GoogleGroup (404 http://twovoyagers.com/improve-usenet.org/).
  • From Pancho@Pancho.Jones@protonmail.com to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 14:18:10 2026

    On 1/11/26 11:55, Harold Stevens wrote:


    1. Anything that works is better than anything that doesn't.

    I think there exists a lot of code which makes the world a worse place,
    and hence it would be better if it didn't work.




  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 07:40:37 2026

    On 1/10/26 20:44, c186282 wrote:
    On 1/10/26 22:06, Peter Flass wrote:
    On 1/10/26 19:41, John Ames wrote:
    On Sun, 11 Jan 2026 00:27:20 +0000
    Nuno Silva <nunojsilva@invalid.invalid> wrote:

    People should have the choice of writing that way if they want. And
    you always have the choice of not reading it.

    (If this sounds too harsh to somebody: I wrote this because of how
    Lawrence repeatedly mentioned "choice" as a way to dismiss criticism
    in other threads in comp.os.linux.misc.)

    While I appreciate the zing, I do have to opine for the record that K&R
    function definitions really are something best left buried with K&R C.
    I'm willing to write in ANSI C for the sake of whatever random weirdo
    wants to try building something of mine on an old proprietary *nix, but
    man I am *not* going any farther back than that.


    It's never good to foreclose your options. One of my goals for Iron
    Spring PL/I is compatibility with the widest base of code possible. It
    can compile and run IBM PL/I(F) code from 1965. The newer stuff is
    better, but rewriting something that works is a pain.

      You got Iron Spring to run properly ?

    FSVO "properly". There are still features I'm working on, but it's
    "good enough" to compile itself and its run-time library. I have
    some users giving it a good work-out.



  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 07:48:14 2026

    On 1/10/26 23:52, c186282 wrote:
    On 1/10/26 21:41, John Ames wrote:
    On Sun, 11 Jan 2026 00:27:20 +0000
    Nuno Silva <nunojsilva@invalid.invalid> wrote:

    People should have the choice of writing that way if they want. And
    you always have the choice of not reading it.

    (If this sounds too harsh to somebody: I wrote this because of how
    Lawrence repeatedly mentioned "choice" as a way to dismiss criticism
    in other threads in comp.os.linux.misc.)

    While I appreciate the zing, I do have to opine for the record that K&R
    function definitions really are something best left buried with K&R C.
    I'm willing to write in ANSI C for the sake of whatever random weirdo
    wants to try building something of mine on an old proprietary *nix, but
    man I am *not* going any farther back than that.

      Look ... nobody is going to be 'writing' much
      of ANYTHING within five years. The "AI" will do
      it all - probably led by the pointy-haired bosses
      who can't find their ass even with a spy sat.

      And Win/Lin/IX ... I think they're going to go
      away as well. It'll all just be thin clients
      plugged into the leading AI engines. No more
      operating systems.

      Maybe PIs ... maybe.

      "Programming" is going to be like those who learn
      to play ancient Greek musical instruments ... an
      interesting, but obsolete, old art. "AI" for worse
      or worser, will be IT. Many TRILLIONS of dollars
      invested in this - it is GOING to be The Future
      whether we like it or not.

      Just sayin'


    I've seen this before. Remember 4GLs like Mark IV et al. that were
    going to make programming so simple that even a PHB could do it? I
    haven't heard much about them lately.

    Any AI-generated code is going to need a long hard look from a real
    programmer before going into production. Often it's harder to pick
    up someone's (or something's) code and verify and fix it than it is
    to write something from scratch where you understand how every part
    of it works.
  • From Stéphane CARPENTIER@sc@fiat-linux.fr to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 14:55:01 2026

    On 11-01-2026, John Ames <commodorejohn@gmail.com> wrote:
    On Sun, 11 Jan 2026 01:52:02 -0500
    c186282 <c186282@nnada.net> wrote:

    Look ... nobody is going to be 'writing' much of ANYTHING within five
    years. The "AI" will do it all - probably led by the pointy-haired
    bosses who can't find their ass even with a spy sat.

    The "AI" bubble isn't going to *last* another five years, full stop.
    Frankly, I'll be shocked if it makes it to '28, if that.

    The AI bubble that matters (I mean: the sense most people attach to
    the term) is that the big companies will stop pouring ever more
    money into it. That doesn't mean the data centers will stop working;
    it means new data centers will stop being built, at least at the
    current pace. It doesn't mean existing GPUs will stop working; it
    means the big companies will stop buying so many GPUs.

    So, globally, everything that has already been built will stay, and
    new things will improve at a slower pace. AI is here and will stay
    here for a long time. When the AI bubble bursts, its impact will be
    felt more in the global economy than in AI's usage.

    That's why the companies invest so much: whichever one has the lead
    when the burst comes expects to keep that lead for a long time. Not
    because they are stupid and can't predict the obvious.

    AI is here; like it or not, you have to live with it. Whether you or
    I like it is irrelevant. When Plato criticized writing because
    people had stopped learning things by heart, writing was
    nevertheless here to stay; it endured through the ages and
    revolutionized things. Some things, like farming, writing, and
    electricity, changed everything about the human way of life, and AI
    is one of them. There is no going back. I'm not saying it's good or
    bad; I'm saying it's evolution (not progress, because progress is
    good by definition), and one can do nothing but live with it.
    --
    Si vous avez du temps à perdre :
    https://scarpet42.gitlab.io
  • From Andreas Eder@a_eder_muc@web.de to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 17:00:55 2026

    On So 11 Jan 2026 at 14:18, Pancho wrote:

    On 1/11/26 11:55, Harold Stevens wrote:

    1. Anything that works is better than anything that doesn't.

    I think there exists a lot of code which makes the world a worse
    place, and hence it would be better if it didn't work.

    It doesn't. It is called Windows. :-)

    'Andreas
    --
    ceterum censeo redmondinem esse delendam
  • From Lawrence D’Oliveiro@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 20:24:12 2026

    On Sun, 11 Jan 2026 16:55:32 GMT, Charlie Gibbs wrote:

    "FORTRAN ... remains popular among engineers but despised elsewhere."

    Considering its enduring popularity among the supercomputing crowd,
    I’d say that assessment is a bit out of date.
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 20:44:50 2026

    On Sun, 11 Jan 2026 05:55:12 -0600, Harold Stevens wrote:

    Greybeard quants like me operated on 3 simple maxims:

    1. Anything that works is better than anything that doesn't.
    2. If it ain't broke, don't fix it.
    3. If it breaks, don't ignore it.

    Those go way beyond programming...

  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Sun Jan 11 18:00:32 2026

    On 1/11/26 15:24, Lawrence D’Oliveiro wrote:
    On Sun, 11 Jan 2026 16:55:32 GMT, Charlie Gibbs wrote:

    "FORTRAN ... remains popular among engineers but despised elsewhere."

    Considering its enduring popularity among the supercomputing crowd,
    I’d say that assessment is a bit out of date.

    There's no reason to "despise" FORTRAN anyhow.
    It's a pretty straightforward sort of lang
    with an emphasis on numerical calcs but in no
    way limited to that sphere.

  • From antispam@antispam@fricas.org (Waldek Hebisch) to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 02:42:42 2026

    In alt.folklore.computers Stéphane CARPENTIER <sc@fiat-linux.fr> wrote:
    On 11-01-2026, John Ames <commodorejohn@gmail.com> wrote:
    On Sun, 11 Jan 2026 01:52:02 -0500
    c186282 <c186282@nnada.net> wrote:

    Look ... nobody is going to be 'writing' much of ANYTHING within five
    years. The "AI" will do it all - probably led by the pointy-haired
    bosses who can't find their ass even with a spy sat.

    The "AI" bubble isn't going to *last* another five years, full stop.
    Frankly, I'll be shocked if it makes it to '28, if that.

    The AI bubble that matters (I mean: the sense most people attach to
    the term) is that the big companies will stop pouring ever more
    money into it. That doesn't mean the data centers will stop working;
    it means new data centers will stop being built, at least at the
    current pace. It doesn't mean existing GPUs will stop working; it
    means the big companies will stop buying so many GPUs.

    So, globally, everything that has already been built will stay, and
    new things will improve at a slower pace. AI is here and will stay
    here for a long time. When the AI bubble bursts, its impact will be
    felt more in the global economy than in AI's usage.

    That's why the companies invest so much: whichever one has the lead
    when the burst comes expects to keep that lead for a long time. Not
    because they are stupid and can't predict the obvious.

    AI is here; like it or not, you have to live with it. Whether you or
    I like it is irrelevant. When Plato criticized writing because
    people had stopped learning things by heart, writing was
    nevertheless here to stay; it endured through the ages and
    revolutionized things. Some things, like farming, writing, and
    electricity, changed everything about the human way of life, and AI
    is one of them. There is no going back. I'm not saying it's good or
    bad; I'm saying it's evolution (not progress, because progress is
    good by definition), and one can do nothing but live with it.

    I think you miss the point. First, a lot was promised and there is
    still hope that something big will be delivered. So, active players
    may win really big; that is why money still flows in.

    Now, concerning a burst: AFAIK the AI companies use investment money
    to cover their cost of operation (wholly or in significant part).
    If there is a burst, they will have to stop operating, literally
    closing their datacenters. Basically, only the things that generate
    profits would survive, plus possibly some research by companies that
    have other sources of income and still want to continue it. But that
    would be at a much lower scale than currently.

    Now, concerning 'AI is there': there was significant progress in
    some areas, like machine translation. "Creative writers" may be
    concerned. But there were attempts to replace a lot of
    professionals, notably programmers. Examples indicate that "AI" can
    create small, trivial pieces of code but does not really work for
    bigger and more complex things. To be useful for programming, "AI"
    and the way it is used must be significantly improved. It is
    possible that slower, gradual improvement will lead to "useful AI".
    But it is also possible that alternative approaches, currently
    underfunded due to the AI race, will progress and be used instead
    of "AI".

    Let me say that I am an AI fan and expect that eventually we will
    have useful AI. But the current trend seems fundamentally wrong. I
    mean, an educated young person has had maybe 20 years of exposure to
    learning materials. Assuming reading 10 hours daily at 10 characters
    per second, we get about 2.6 GB of learning material. Besides
    reading there is speech, smell, touch, and visual information.
    Speech has a similar speed to reading; smell and touch seem to be
    lower-bandwidth. Theoretically the visual tract has huge bandwidth,
    but blind people seem to be as intelligent as anyone else, so the
    visual part is likely noncritical for general intelligence (as
    opposed to some specific visual tasks). So 2.6 GB looks like a very
    generous estimate of the amount of training material needed for
    human-level intelligence. Yet current AI uses vastly bigger training
    sets and delivers much lower performance. I have heard from a
    neural-network specialist that current network training algorithms
    are vastly more efficient than the training going on in the brain
    (or rather, vastly more efficient than our current idea of what goes
    on inside the brain). Yet the computational cost of AI training
    seems to approach the estimated peak compute power of the human
    brain times the time needed for learning.

    So it looks like we are missing something important for general AI.
    In applications, ANNs apparently struggle with tasks that have easy
    algorithmic solutions, so the natural way forward for applications
    seems to be via hybrid approaches. But the AI crowd seems to prefer
    pure-ANN solutions and tries to brute-force problems using more
    compute power.
    --
    Waldek Hebisch
  • From The Natural Philosopher@tnp@invalid.invalid to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 11:49:24 2026

    On 11/01/2026 20:44, rbowman wrote:
    On Sun, 11 Jan 2026 05:55:12 -0600, Harold Stevens wrote:

    Greybeard quants like me operated on 3 simple maxims:

    1. Anything that works is better than anything that doesn't.
    2. If it ain't broke, don't fix it.
    3. If it breaks, don't ignore it.

    Those go way beyond programming...

    Part of the 'philosophy of engineering'.

    Perhaps the most fundamental one, after 'an engineer is someone who can
    do for five bob what any damn fool can do for a quid' is

    'In the construction of mechanisms, complexity should not be multiplied
    beyond that necessary to achieve the defined objective'

    Ockham's Laser...
    --
    Canada is all right really, though not for the whole weekend.

    "Saki"

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 07:50:24 2026
    From Newsgroup: alt.folklore.computers

    On 1/11/26 19:42, Waldek Hebisch wrote:


    Now, concerning 'AI is there', there was significant progress in
    some areas like machine translation. "Creative writers" may be
    concerned. But there were attempts to replace a lot of
    professionals, notably programmers. Examples indicate that "AI"
    can create small, trivial pieces of code but does not really
    work for bigger and more complex things. To be useful for
    programming, "AI" and the way it is used must be significantly
    improved.

    I know this is kind of niche, but just for kicks I asked ChatGPT to
    generate a PL/I version of a structure. I have to say I'm surprised that
    it even knew what PL/I is, but I was more surprised that it
    "hallucinated" some non-existent syntax. I thought it had to be me, but
    I researched it and couldn't find it in any compiler or dialect.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 07:52:46 2026
    From Newsgroup: alt.folklore.computers

    On 1/12/26 04:49, The Natural Philosopher wrote:
    On 11/01/2026 20:44, rbowman wrote:
    On Sun, 11 Jan 2026 05:55:12 -0600, Harold Stevens wrote:

    Greybeard quants like me operated on 3 simple maxims:

    1. Anything that works is better than anything that doesn't.
    2. If it ain't broke, don't fix it.
    3. If it breaks, don't ignore it.

    Those go way beyond programming...

    Part of the 'philosophy of engineering'.

    Perhaps the most fundamental one, after 'an engineer is someone who can
    do for five bob what any damn fool can do for a quid' is

    'In the construction of mechanisms, complexity should not be multiplied beyond that necessary to achieve the defined objective'

    Ockham's Laser...


    Now if only computer people could follow this rule. Our rule seems to be
    "why not add just this one more feature"
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Charlie Gibbs@cgibbs@kltpzyxm.invalid to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 17:03:19 2026
    From Newsgroup: alt.folklore.computers

    On 2026-01-12, Peter Flass <Peter@Iron-Spring.com> wrote:

    On 1/12/26 04:49, The Natural Philosopher wrote:

    On 11/01/2026 20:44, rbowman wrote:

    On Sun, 11 Jan 2026 05:55:12 -0600, Harold Stevens wrote:

    Greybeard quants like me operated on 3 simple maxims:

    1. Anything that works is better than anything that doesn't.
    2. If it ain't broke, don't fix it.
    3. If it breaks, don't ignore it.

    Those go way beyond programming...

    Part of the 'philosophy of engineering'.

    Perhaps the most fundamental one, after 'an engineer is someone who can
    do for five bob what any damn fool can do for a quid' is

    'In the construction of mechanisms, complexity should not be multiplied
    beyond that necessary to achieve the defined objective'

    Ockham's Laser...

    :-)

    Now if only computer people could follow this rule. Our rule seems to be "why not add just this one more feature"

    To be fair, I'm sure a lot of computer people are doing this
    under duress, being ordered by the marketroids (who have the
    ear of management) to add yet another shiny thing.

    Perfection is achieved, not when there
    is nothing more to add, but when there
    is nothing left to take away.
    -- Antoine de Saint Exupéry

    Simplify, simplify.
    -- Henry David Thoreau
    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From John Ames@commodorejohn@gmail.com to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 09:14:56 2026
    From Newsgroup: alt.folklore.computers

    On Mon, 12 Jan 2026 02:42:42 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wrote:

    Now, concerning a burst: AFAIK AI companies use investment money
    to cover the cost of operation (wholly or in significant part).
    If there is a burst, they will have to stop operating, literally
    closing their datacenters. Basically only things that generate
    profits would survive, or possibly some research by companies
    that have other sources of income and still want to continue
    research. But that would be at a much lower scale than currently.

    This is the key point on which the "but the tech will remain!" argument founders. It's not just that research will stop being funded, but that
    it costs so much just to *run* that the only thing keeping the lights
    on is regular infusions of billions in investor cash - and, in OpenAI's
    case, generous subsidies on raw compute from M$.

    Once *that* dries up, they're either going to have to magically become
    orders of magnitude more efficient (which they've showed *absolutely*
    no aptitude for - and China had a good laugh pantsing our tech sector
    on that score last spring, but it's sounding post-fact like their gains
    were, um, not as impressive as they'dve liked everyone to think) or
    price to reflect the *real* cost of operations - and if they can only
    get a fraction of their userbase to pay $200/mo. for a Magical Chatbot
    Friend, good freakin' luck squeezing any more blood from *that* turnip.

    The much likelier scenario is that when the funding stops flowing in
    and the business plan fails to advance from "???" to "profit!" things
    will just...*stop.*

    Obviously, it'll still be *possible* to write this kind of software and
    run it in a slower, more limited context on stock hardware (high-end
    GPUs will be going *real* cheap for a while, but they also cost so much
    to run and require so much infrastructure that it's questionable how cost-effective it'll be to reuse them,) and possibly the tech will find
    its niche in certain fields (as "expert systems" did after the first
    "AI winter,") but the economics of doing "generative AI" at global
    scale are simply *insane,* and the current offerings in their present
    form simply will not survive once taken off VC life support.

    (Again, I'll recommend Ed Zitron on this, who has written *extensively*
    about it over the past couple years, and has done a lot to break down
    how much it actually *costs* - https://www.wheresyoured.at/oai_docs/ )

    But there were attempts to replace a lot of professionals,
    notably programmers. Examples indicate that "AI" can create
    small, trivial pieces of code but does not really work for
    bigger and more complex things. To be useful for programming,
    "AI" and the way it is used must be significantly improved.
    It is possible that slower, gradual improvement will lead to
    "useful AI". But it is also possible that alternative
    approaches, currently underfunded due to the AI race, will
    progress and be used instead of "AI".

    [...]

    So it looks like for general AI we are missing something
    important. For applications, ANNs apparently struggle with
    tasks that have easy algorithmic solutions. So the natural way
    forward with applications seems to be via hybrid approaches.
    But the AI crowd seems to prefer pure ANN solutions and tries
    to brute-force problems using more compute power.

    Exactly. It's entirely possible that programming twenty years from now
    will look very different from programming today, and there *may* come a
    point where we "crack" using ML techniques for software development in
    a productive, reliable way - but it's *not* gonna come from the current
    crop of clowns, nor through burning infinite money on throwing *MOAR COMPUTE!!!1one* at the same stupid "solution."

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 19:56:45 2026
    From Newsgroup: alt.folklore.computers

    On Mon, 12 Jan 2026 07:52:46 -0700, Peter Flass wrote:

    Now if only computer people could follow this rule. Our rule seems
    to be "why not add just this one more feature"

    We follow Einstein’s rule: “things should be as complicated as they
    need to be, but no more.”
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 19:57:26 2026
    From Newsgroup: alt.folklore.computers

    On Mon, 12 Jan 2026 17:03:19 GMT, Charlie Gibbs wrote:

    To be fair, I'm sure a lot of computer people are doing this under
    duress, being ordered by the marketroids (who have the ear of
    management) to add yet another shiny thing.

    Aren’t you glad the Free Software world isn’t driven by marketroids?
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From scott@scott@slp53.sl.home (Scott Lurndal) to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 20:05:05 2026
    From Newsgroup: alt.folklore.computers

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:
    On Mon, 12 Jan 2026 17:03:19 GMT, Charlie Gibbs wrote:

    To be fair, I'm sure a lot of computer people are doing this under
    duress, being ordered by the marketroids (who have the ear of
    management) to add yet another shiny thing.

    Aren’t you glad the Free Software world isn’t driven by marketroids?

    It's not?

    AI Overview
    https://upload.wikimedia.org/wikipedia/commons/e/e2/Eric_S_Raymond_portrait.jpg

    Eric S. Raymond (ESR), the well-known open-source advocate, began charging
    speaking fees for corporate events in 1999 but waives fees for schools and
    user groups; however, specific current fee amounts aren't publicly listed,
    requiring direct contact with booking agents or his website, though general
    estimates for similar speakers suggest fees could range from thousands to
    tens of thousands depending on the event and his involvement.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Peter Flass@Peter@Iron-Spring.com to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 13:12:02 2026
    From Newsgroup: alt.folklore.computers

    On 1/12/26 10:14, John Ames wrote:
    if they can only
    get a fraction of their userbase to pay $200/mo. for a Magical Chatbot Friend, good freakin' luck squeezing any more blood from *that* turnip.


    Make it a s*xbot, and all the incels will pay to imagine they have a girlfriend.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Niklas Karlsson@nikke.karlsson@gmail.com to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 20:16:39 2026
    From Newsgroup: alt.folklore.computers

    On 2026-01-12, Scott Lurndal <scott@slp53.sl.home> wrote:
    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:

    Aren’t you glad the Free Software world isn’t driven by marketroids?

    It's not?

    AI Overview
    https://upload.wikimedia.org/wikipedia/commons/e/e2/Eric_S_Raymond_portrait.jpg

    Eric S. Raymond (ESR), the well-known open-source advocate, began charging
    speaking fees for corporate events in 1999 but waives fees for schools and
    user groups; however, specific current fee amounts aren't publicly listed,
    requiring direct contact with booking agents or his website, though general
    estimates for similar speakers suggest fees could range from thousands to
    tens of thousands depending on the event and his involvement.

    Was ESR ever nearly as influential as he tried to make it look, though?
    OK, so he got speaking fees (unclear how often or how much), but did he
    have much effect on actual FOSS projects?

    Niklas
    --
    This year's Corporate Technology Expo was no different than the ones
    for years previous. [...] The scene was a three-hour, seemingly
    unending procession of PowerPoint slides with enough laser pointers
    to take down an incoming ICBM.
    -- http://thedailywtf.com/Articles/MUMPS-Madness.aspx
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Harold Stevens@wookie@aspen.localdomain to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 14:48:17 2026
    From Newsgroup: alt.folklore.computers

    In <10k3kii$2k2m7$1@dont-email.me> Peter Flass:
    On 1/12/26 10:14, John Ames wrote:
    if they can only
    get a fraction of their userbase to pay $200/mo. for a Magical Chatbot
    Friend, good freakin' luck squeezing any more blood from *that* turnip.


    Make it a s*xbot, and all the incels will pay to imagine they have a girlfriend.

    LOL!!! Beats hell outta reuseable inflatable plastic dolls, hidden
    in the basement in grandad's empty Vietnam ammo box.

    "No self-respecting woman wants to date you? SIGN UP FOR AIGRLLL!"

    IIRC, it was pay-to-view online porn sites in the late 1990's that
    drove initial development of secure consumer online payment apps.
    --
    Regards, Weird (Harold Stevens) * IMPORTANT EMAIL INFO FOLLOWS *
    Pardon any bogus email addresses (wookie) in place for spambots.
    Really, it's (wyrd) at att, dotted with net. * DO NOT SPAM IT. *
    I toss (404) GoogleGroup (404 http://twovoyagers.com/improve-usenet.org/).
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Charlie Gibbs@cgibbs@kltpzyxm.invalid to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 00:24:02 2026
    From Newsgroup: alt.folklore.computers

    On 2026-01-12, Lawrence D’Oliveiro <ldo@nz.invalid> wrote:

    On Mon, 12 Jan 2026 07:52:46 -0700, Peter Flass wrote:

    Now if only computer people could follow this rule. Our rule seems
    to be "why not add just this one more feature"

    We follow Einstein’s rule: “things should be as complicated as they
    need to be, but no more.”

    s/complicated/simple/
    --
    /~\ Charlie Gibbs | Growth for the sake of
    \ / <cgibbs@kltpzyxm.invalid> | growth is the ideology
    X I'm really at ac.dekanfrus | of the cancer cell.
    / \ if you read it the right way. | -- Edward Abbey
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Lawrence =?iso-8859-13?q?D=FFOliveiro?=@ldo@nz.invalid to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 00:36:47 2026
    From Newsgroup: alt.folklore.computers

    On Tue, 13 Jan 2026 00:24:02 GMT, Charlie Gibbs wrote:

    On 2026-01-12, Lawrence D’Oliveiro <ldo@nz.invalid> wrote:

    On Mon, 12 Jan 2026 07:52:46 -0700, Peter Flass wrote:

    Now if only computer people could follow this rule. Our rule seems
    to be "why not add just this one more feature"

    We follow Einstein’s rule: “things should be as complicated as they
    need to be, but no more.”

    s/complicated/simple/

    Really?? The man who brought Riemann tensors into physics?
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From c186282@c186282@nnada.net to comp.os.linux.misc,alt.folklore.computers on Mon Jan 12 23:04:35 2026
    From Newsgroup: alt.folklore.computers

    On 1/12/26 19:36, Lawrence D’Oliveiro wrote:
    On Tue, 13 Jan 2026 00:24:02 GMT, Charlie Gibbs wrote:

    On 2026-01-12, Lawrence D’Oliveiro <ldo@nz.invalid> wrote:

    On Mon, 12 Jan 2026 07:52:46 -0700, Peter Flass wrote:

    Now if only computer people could follow this rule. Our rule seems
    to be "why not add just this one more feature"

    We follow Einstein’s rule: “things should be as complicated as they
    need to be, but no more.”

    s/complicated/simple/

    Really?? The man who brought Riemann tensors into physics?

    Well, really, it HAD to be that complicated just
    to be 'clear' ........

    Even worse now, QM and 'strings' and such.

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 04:31:00 2026
    From Newsgroup: alt.folklore.computers

    On Mon, 12 Jan 2026 19:57:26 -0000 (UTC), Lawrence D’Oliveiro wrote:

    On Mon, 12 Jan 2026 17:03:19 GMT, Charlie Gibbs wrote:

    To be fair, I'm sure a lot of computer people are doing this under
    duress, being ordered by the marketroids (who have the ear of
    management) to add yet another shiny thing.

    Aren’t you glad the Free Software world isn’t driven by marketroids?

    Are you sure of that? They might not have an office and title.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 04:46:25 2026
    From Newsgroup: alt.folklore.computers

    On Mon, 12 Jan 2026 20:05:05 GMT, Scott Lurndal wrote:

    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:
    On Mon, 12 Jan 2026 17:03:19 GMT, Charlie Gibbs wrote:

    To be fair, I'm sure a lot of computer people are doing this under
    duress, being ordered by the marketroids (who have the ear of
    management) to add yet another shiny thing.

    Aren’t you glad the Free Software world isn’t driven by marketroids?

    It's not?

    AI Overview
    https://upload.wikimedia.org/wikipedia/commons/e/e2/
    Eric_S_Raymond_portrait.jpg

    Eric S. Raymond (ESR), the well-known open-source advocate, began
    charging speaking fees for corporate events in 1999 but waives fees
    for schools and user groups; however, specific current fee amounts
    aren't publicly listed, requiring direct contact with booking agents
    or his website, though general estimates for similar speakers suggest
    fees could range from thousands to tens of thousands depending on the
    event and his involvement.


    He's not supposed to make a living. That is 26 year old news anyway. Even
    the photo is from 2004. I don't see anything about speaking on his website although there is some interesting material.

    http://www.catb.org/~esr/guns/

    He has pissed a lot of people off, including the Stalin, er, Stallman worshipers.



    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 05:01:17 2026
    From Newsgroup: alt.folklore.computers

    On 12 Jan 2026 20:16:39 GMT, Niklas Karlsson wrote:

    On 2026-01-12, Scott Lurndal <scott@slp53.sl.home> wrote:
    Lawrence =?iso-8859-13?q?D=FFOliveiro?= <ldo@nz.invalid> writes:

    Aren’t you glad the Free Software world isn’t driven by marketroids?

    It's not?

    AI Overview
    https://upload.wikimedia.org/wikipedia/commons/e/e2/ Eric_S_Raymond_portrait.jpg

    Eric S. Raymond (ESR), the well-known open-source advocate, began
    charging speaking fees for corporate events in 1999 but waives fees
    for schools and user groups; however, specific current fee amounts
    aren't publicly listed, requiring direct contact with booking agents
    or his website, though general estimates for similar speakers
    suggest fees could range from thousands to tens of thousands
    depending on the event and his involvement.

    Was ESR ever nearly as influential as he tried to make it look, though?
    OK, so he got speaking fees (unclear how often or how much), but did he
    have much effect on actual FOSS projects?

    It's hard to say how much effect he himself had but if you go back to 'The Cathedral and the Bazaar' the bazaar is doing very well these days.

    https://dotnet.microsoft.com/en-us/platform/open-source

    Do you think Microsoft would get involved with a GPL project? How many
    other FOSS projects use the MIT, Apache, Zero Clause BSD, or other
    permissive licenses?

    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Niklas Karlsson@nikke.karlsson@gmail.com to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 06:31:43 2026
    From Newsgroup: alt.folklore.computers

    On 2026-01-13, rbowman <bowman@montana.com> wrote:

    Do you think Microsoft would get involved with a GPL project?

    They already have, including the Linux kernel. If one is to believe
    Wikipedia, their first real run-in with the GPL was in 1998.

    https://en.wikipedia.org/wiki/Microsoft_and_open_source#1990s

    How many other FOSS projects use the MIT, Apache, Zero Clause BSD, or
    other permissive licenses?

    I don't know offhand, but I've always been under the impression the
    licenses you mentioned are all relatively widespread.

    Niklas
    --
    "Abendessen" ("evening meal")
    ITYM "meal eaten whilst trying to figure out why the damn program keeps crashing"
    -- Kai Henningsen and Cael in asr
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 08:09:31 2026
    From Newsgroup: alt.folklore.computers

    On 13 Jan 2026 06:31:43 GMT, Niklas Karlsson wrote:


    How many other FOSS projects use the MIT, Apache, Zero Clause BSD, or
    other permissive licenses?

    I don't know offhand, but I've always been under the impression the
    licenses you mentioned are all relatively widespread.

    Precisely. Raymond's argument was that restrictive licenses would deter FOSS development.
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From Niklas Karlsson@nikke.karlsson@gmail.com to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 14:21:45 2026
    From Newsgroup: alt.folklore.computers

    On 2026-01-13, rbowman <bowman@montana.com> wrote:
    On 13 Jan 2026 06:31:43 GMT, Niklas Karlsson wrote:

    How many other FOSS projects use the MIT, Apache, Zero Clause BSD, or
    other permissive licenses?

    I don't know offhand, but I've always been under the impression the
    licenses you mentioned are all relatively widespread.

    Precisely. Raymond's argument was restrictive licenses would deter FOSS development.

    Fair enough, but to go back to where this subthread came from, does that
    make him a marketroid? Or just an advocate, maybe agitator if you're
    feeling dramatic?

    "Marketroid" makes me think of something rather more pernicious than
    just copyleft vs. copycenter (to reference the Jargon File, speaking of
    ESR).

    Niklas
    --
    In Python, I'd model the CSV row as a class with a method that deserialised an array-of-string, with appropriate validation and coercion to a known static type. And then I'd get half way through before going "fuck it" and finishing it off in Perl. -- Peter Corlett
    --- Synchronet 3.21b-Linux NewsLink 1.2
  • From rbowman@bowman@montana.com to comp.os.linux.misc,alt.folklore.computers on Tue Jan 13 17:54:52 2026
    From Newsgroup: alt.folklore.computers

    On 13 Jan 2026 14:21:45 GMT, Niklas Karlsson wrote:

    On 2026-01-13, rbowman <bowman@montana.com> wrote:
    On 13 Jan 2026 06:31:43 GMT, Niklas Karlsson wrote:

    How many other FOSS projects use the MIT, Apache, Zero Clause BSD, or
    other permissive licenses?

    I don't know offhand, but I've always been under the impression the
    licenses you mentioned are all relatively widespread.

    Precisely. Raymond's argument was restrictive licenses would deter FOSS
    development.

    Fair enough, but to go back to where this subthread came from, does that
    make him a marketroid? Or just an advocate, maybe agitator if you're
    feeling dramatic?

    "Marketroid" makes me think of something rather more pernicious than
    just copyleft vs. copycenter (to reference the Jargon File, speaking of
    ESR).

    Niklas

    I see no reason to call him a 'marketroid', whatever that might be.
    --- Synchronet 3.21b-Linux NewsLink 1.2