AI content coming to ex-Kalmbach titles? Firecrown CEO all in on AI content

Looks like Firecrown CEO Craig Fuller is all in on AI content: How We Use AI

1 Like

Before responding on this thread, I would strongly suggest people actually READ the linked article, step away from the keyboard and THINK about the article, READ the article again, apply some CRITICAL THINKING, then write a response.

The way the thread title is written is a good example of what the internet now calls “rage bait”. Post on a hot trigger topic and people will respond with ideology, not reasoned arguments.

3 Likes

Sounds like you work at Firecrown.

Sounds more like you work at a competitor

2 Likes

This doesn’t sound like “all in”. While I do not approve of ‘AI’, I’m glad that, for the moment, it hasn’t been used any further. If they do get to writing articles with it, then I am definitely going to get annoyed with the whole matter.

Well said Bob. The mention of AI can scare some. I recall long-ago soda dispensing machines. AI? One dropped coins into the machine, and it in turn allowed a soda to be retrieved through slots that opened. Was a person inspecting the coin and opening the slots, like a person-operated toll booth? No, it was the beginning of artificial intelligence. True, there comes a time when AI gets out of hand, by going too far. As the saying went when computers first came about: garbage in, garbage out.

For this discussion, toward the end of the Firecrown cited source, this quote says it all: “Rest assured, we will never use AI to write experiential or product reviews, though it’s invaluable for tracking and reacting to unfolding news. This aligns with practices at global leaders like Fox News, The Wall Street Journal, and Bloomberg”. (We = Firecrown) This sounds like they are using AI, UP TO A POINT, and NOT dangerously beyond. regards all mike endmrw0811251049

Mike, keep in mind that all the current ‘craze’ AI is an enormous fake, based on LLM models that are comically over-reliant on ‘training’. The only way they ‘learn’ anything new – by instruction, by experience, by correction – is by being completely retrained at similar great cost and energy expense… and if anything changes in the training weights, the result has no comprehension of its previous state.

Nearly 50 years ago, I was developing a system called Clara Velentine, which was supposed to work like a good executive secretary ‘in the background’ to help you determine how best to navigate the coming world of ‘cyberspace’ and make best coherent use of what you had to learn. This was shortly after the Apple ‘Manhattan Project’ to design a computer for which 15 minutes of ‘nerd stuff’ (learning to work a mouse, getting the haptic sense of pull-down menus, remembering cryptic cmd-letter codes, etc.) would be needed to run any program that would EVER BE WRITTEN for the machine. crApple subsequently veered far from that vision, but it occurred to me then, and I think it is still true now, that adding convergent AI logic to a reasonably customizable set of interface conventions gets you remarkably close to a kind of pervasive computing that is enabling, rather than causing the sort of ‘dependence’ calculators do for learning basic math.

Most of the stuff in that era was ‘faked’ in a different way; I called it ‘AI/ES’, meaning it was a convergence of natural-language and expert-system operations. The ‘fake’ part came from reading Max Weber; you didn’t need an ‘artificial consciousness’ with evolved free will – you just program in fundamental Judeo-Christian values and the result has all the hallmarks of a conscience. (You run multiple instances with different rules for mission-critical conscience like train dispatching or nuclear deterrence, but that’s a whole 'nother story…)

The current AI lacks this, and more troublesomely it lacks the least shred of trustability, to the point you now see printed notices that any AI content may contain errors and hallucinations, which build off each other as they percolate through ‘distributed consciousness’. I have been following the adoption of AI at the Cleveland Plain Dealer with some interest, as their conclusions are coming to resemble mine: You can use AI for rapid and broad fact-checking… as long as an educated human mind that already knows the facts backstops everything that is ‘fact-checked’. And so many sufficiently-educated minds are already programmed to value ideology over fact when a narrative is expedient…

We already have one fellow on the Forums who does not even know correct English grammar but claims Google Gemini is a perfect and infallible source of complex revealed wisdom. As that ‘wisdom’ increasingly informs those who make the narratives… we come appallingly close to something I thought could never happen here: an articulate clone of Newspeak.

1 Like

Are they the good kind of robots like RoboCop and Arnold Schwarzenegger from T2 or the bad kind like Roy Batty from Blade Runner?

For now. I’m sure we’ll see AI-written articles soon enough.

Or AI forum posters. Whatever happened to Euclid, btw?

2 Likes

“Committing to never using AI sets a dangerous precedent.” For whom? Interesting AI choice of verbiage. Probably would set a dangerous precedent, as you’d be one of the few folks people might actually READ that wasn’t AI.
ā€œOur official policy is that any writer who publishes content, whether AI-assisted or not, is responsible for every single word.ā€
Right.
“Rest assured, we will never use AI to write experiential or product reviews, though it’s invaluable for tracking and reacting to unfolding news.”
No need. Lots of reviews have been ‘radioed in’ in the past, citing wonderful operational features of, say, a locomotive – specific items that could be plugged in – all cut-and-pasted from the manufacturer’s release. Not one of those items claimed existed at time of writing! When called on their (very early) form of Artificial Insertion… they didn’t care.
I wrote reviews. For years. Current, voltage, drawbar pull, inspection, and full teardown, THEN write. Try that, AI.
TOC

1 Like

Here comes the horse’s mouth to tell you we don’t use AI or LLM to write articles for Model Railroader. It’s hard enough to find a human that both knows model railroading and can write. I’ve experimented with using AI and LLM to do simple tasks, such as scan an article and fix spelling, grammar and punctuation errors, which saves me a bit of time, but that’s as far as it’s likely to go.
The amount of computing power it would take to replicate a Pelle Søeborg or Gerry Leone is far beyond what any publisher will be able to afford for quite some time. You can rest easy that there is an actual human behind every story you read in Model Railroader.
Eric

7 Likes

Thank you.

The most common image created in folks today who fear the overuse of AI is Skynet from the Terminator franchise. A few will remember HAL from 2001 or M-5 from Star Trek’s “The Ultimate Computer.” For another interesting image, I refer you to Colossus from The Forbin Project.

Some say that these visions are simple fear of the unknown and inevitable progress.

I tend to go the other way and say that we don’t fear the unknown and ā€œinevitable progressā€ enough.

I have been accused of being a Luddite (by folks who really don’t know what that term means), but, in enough cases to induce caution, I think ol’ Ned wasn’t wrong.

YMMV. Void in some states. Do not fold, spindle, or mutilate. Dramatic recreation using professional actors. Or maybe recreated using AI? Would you know? Do you care?

:rofl: :maniacal laugh: :rofl:

I’m piggybacking off of what Eric posted – AI or LLM only “know” what has been given to them. This is not helpful for lesser-known areas of toy trains. For instance, I tested editing an article about toy trains in Argentina. It was not very useful because there’s little known about the subject matter. (Here’s the article in question, edited by me: Toy trains in Argentina - Trains)

2 Likes

Well, I’m less so “afraid” of a rogue AI and more so afraid of the potential ubiquity of junk that AI frequently creates. I’ve experimented with it, and I certainly don’t think that it is going to displace all the creative jobs. Why? Because, to put it simply, AI creates garbage. A simple way to understand how many of the AIs work is to think of it as a search engine combined with an averaging equation. It searches some kind of database or library (many search the Internet), takes what it finds that matches, and then “averages” the data – in other words, it more or less tries to put it together in a way that fits the grammatical rules it is provided with. While this is definitely a very advanced piece of technology, it is not artificial intelligence. Nor does it work well. First of all, it gets things wrong. Often.
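As a rough toy illustration of that “takes what it finds that matches and averages it” idea, here is a tiny bigram text generator. Everything in it (the corpus, the `generate` function) is invented purely for demonstration; real LLMs predict text from learned neural-network weights, not from a literal lookup table like this.

```python
import random

# Made-up toy corpus; real systems train on vastly more text.
corpus = "the train left the yard and the train met the local".split()

# Build a bigram table: for each word, record every word seen after it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=6, seed=0):
    """Pick each next word at random from those seen after the current one."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        choices = follows.get(words[-1])
        if not choices:  # no recorded successor: stop early
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate("the"))
```

Every output is grammatical-looking locally but has no understanding behind it, which is roughly the failure mode being described: statistically plausible word sequences, not knowledge.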
For example, here’s what Google Gemini had to say about BNSF 1988:
“Based on the available information, there is no BNSF locomotive numbered 1988. It’s likely you are thinking of a different railroad’s locomotive.
The number 1988 is associated with a specific “Heritage Fleet” locomotive from Union Pacific, which is a different railroad. Union Pacific’s locomotive #1988 is a commemorative unit honoring the Missouri-Kansas-Texas Railroad (often called “The Katy”), which was acquired by Union Pacific in 1988.”

Oh really? And what about this locomotive?


Finally, after a while, I managed to get it to realize that BNSF 1988 did actually exist. It then declared that it was originally BN 6386. WRONG! It then claimed that it was rebuilt and no longer carried the number 1988. ALSO WRONG!
I think that you get the idea.
Furthermore, there are plenty of people saying that AI could create Shakespeare-level literature. This is not practical. Setting aside the fact that AI is generally unreliable, if it searches the Internet to find a basis for its literature, then it will find not only Shakespeare but also random people screaming nonsense on blogs, social media, and forums. And there is a lot more nonsense. As such, what AI would write would mostly consist of nonsense.
End of speech.

2 Likes

Is there reason to worry? Not when we’re talking about railroads or researching a subject.

We do need to worry that governments are increasingly relying on AI to operate. Are government policies going to be influenced by incorrect AI information?

What about defense? Is the military going to rely on AI to carry out certain operations? Will AI determine if or when to fire weapons and at what targets? On and on with this.

I think we have very good reasons for worrying about AI and how far we will allow it to go.

4 Likes

Agree with York1.

But concerning AI creating garbage: too few people know–or care–about the differences between good work and garbage. Too many are perfectly willing to accept garbage. (It would be easy for me to ascend my soapbox concerning education, but I will restrain myself.)

Photo of me taken in 2005 at the Proud Studio in Chonburi, Thailand:

AI-generated animation created in 2025 using a sample program:

2 Likes

I have to agree. Too many people fit this category. And, because of that, too much garbage is created.

Yep, and that movie came about because James Cameron couldn’t get his answering machine to work, so he got mad and cussed at the popcorn maker. Yes, before any of the fanboys come chiming in, I know all about it being from some dream he had, but my version is funnier, so I’m taking some artistic license. Why? Because I feel like it, that’s why.

1 Like