Looks like Firecrown CEO Craig Fuller is all in on AI content: How We Use AI
Before responding on this thread, I would strongly suggest people actually READ the linked article, step away from the keyboard and THINK about the article, READ the article again, apply some CRITICAL THINKING, then write a response.
The way the thread title is written is a good example of what the internet now calls "rage bait": post on a hot trigger topic and people will respond with ideology, not reasoned arguments.
Sounds like you work at Firecrown.
Sounds more like you work at a competitor.
This doesn't sound like "all in". While I do not approve of "AI", I'm glad that, for the moment, it hasn't been used any further. If they do get to writing articles with it, then I am definitely going to get annoyed with the whole matter.
Well said, Bob. The mention of AI can scare some. I recall long-ago soda dispensing machines. AI? One dropped coins into the machine, and it in turn allowed a soda to be retrieved through slots that opened. Was a person inspecting the coin and opening the slots, like a person-operated toll booth? No, it was the beginning of artificial intelligence. True, there comes a time when AI gets out of hand by going too far. As the saying went when computers first came about: Garbage In, Garbage Out.
For this discussion, toward the end of the Firecrown-cited source, this quote says it all: "Rest assured, we will never use AI to write experiential or product reviews, though it's invaluable for tracking and reacting to unfolding news. This aligns with practices at global leaders like Fox News, The Wall Street Journal, and Bloomberg." (We = Firecrown) This sounds like they are using AI, UP TO A POINT, and NOT dangerously beyond. Regards, all. mike endmrw0811251049
Mike, keep in mind that all the current "craze" AI is an enormous fake, based on LLM models that are comically over-reliant on "training". The only way they "learn" anything new (by instruction, by experience, by correction) is by being completely retrained at similar great cost and energy expense… and if anything changes in the training weights, the result has no comprehension of its previous state.
Nearly 50 years ago, I was developing a system called Clara Velentine, which was supposed to work like a good executive secretary "in the background" to help you determine how best to navigate the coming world of "cyberspace" and make best coherent use of what you had to learn. This was shortly after the Apple "Manhattan Project" to design a computer for which 15 minutes of "nerd stuff" (learning to work a mouse, getting the haptic sense of pull-down menus, remembering cryptic cmd-letter codes, etc.) would be needed to run any program that would EVER BE WRITTEN for the machine… crApple subsequently veered far from that vision, but it occurred to me then, and I think it is still true now, that adding convergent AI logic to a reasonably customizable set of interface conventions gets you remarkably close to a kind of pervasive computing that's enabling, rather than causing the sort of "dependence" calculators do for learning basic math.
Most of the stuff in that era was "faked" in a different way; I called it "AI/ES", meaning it was a convergence of natural-language and expert-system operations. The "fake" part came from reading Max Weber; you didn't need an "artificial consciousness" with evolved free will; you just program in fundamental Judeo-Christian values and the result has all the hallmarks of a conscience. (You run multiple instances with different rules for mission-critical conscience like train dispatching or nuclear deterrence, but that's a whole 'nother story…)
The current AI lacks this, and more troublesomely it lacks the least shred of trustability, to the point you now see printed notices that any AI content may contain errors and hallucinations, which build off each other as they percolate through "distributed consciousness". I have been following the adoption of AI at the Cleveland Plain Dealer with some interest, as their conclusions are coming to resemble mine: you can use AI for rapid and broad fact-checking… as long as an educated human mind that already knows the facts backstops everything that is "fact-checked". And so many sufficiently-educated minds are already programmed to value ideology over fact when a narrative is expedient…
We already have one fellow on the Forums who does not even know correct English grammar but claims Google Gemini is a perfect and infallible source of complex revealed wisdom. As that "wisdom" increasingly informs those who make the narratives… we come appallingly close to something I thought could never happen here: an articulate clone of Newspeak.
Are they the good kind of robots, like RoboCop and Arnold Schwarzenegger from T2, or the bad kind, like Roy Batty from Blade Runner?
For now. I'm sure we'll see AI-written articles soon enough.
Or AI forum posters. Whatever happened to Euclid, btw?
"Committing to never using AI sets a dangerous precedent." For whom? Interesting AI choice of verbiage. It probably would set a dangerous precedent, as you'd be one of the few folks people might actually READ who wasn't AI.
"Our official policy is that any writer who publishes content, whether AI-assisted or not, is responsible for every single word."
Right.
"Rest assured, we will never use AI to write experiential or product reviews, though it's invaluable for tracking and reacting to unfolding news."
No need. Lots of reviews have been "radioed" in the past, citing wonderful operational features of, say, a locomotive, and specific items that could be plugged in… all cut-and-paste from the manufacturer's release. Not one of those claimed items existed at the time of writing! When called on their (very early) form of Artificial Insertion… they didn't care.
I wrote reviews. For years. Current, voltage, drawbar pull, inspection, and full teardown, THEN write. Try that, AI.
TOC
Here comes the horse's mouth to tell you we don't use AI or LLMs to write articles for Model Railroader. It's hard enough to find a human who both knows model railroading and can write. I've experimented with using AI and LLMs to do simple tasks, such as scanning an article and fixing spelling, grammar, and punctuation errors, which saves me a bit of time, but that's as far as it's likely to go.
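For what it's worth, the mechanical side of that kind of cleanup (known typos, doubled spaces, stray space before a period) doesn't even need an LLM. Here's a trivial, purely hypothetical find-and-replace sketch in Python; the typo table is made up for illustration and a real editing pass would be far larger:

```python
import re

# Hypothetical table of common typos; illustration only.
TYPOS = {"teh": "the", "recieve": "receive", "seperate": "separate"}

def cleanup(text):
    # Replace whole-word typos so "other" isn't mangled by "teh", etc.
    for wrong, right in TYPOS.items():
        text = re.sub(rf"\b{wrong}\b", right, text)
    # Collapse runs of spaces, then remove a space before a period.
    text = re.sub(r" {2,}", " ", text)
    text = re.sub(r" \.", ".", text)
    return text

print(cleanup("I recieve teh  kit ."))  # -> "I receive the kit."
```

The point is just that deterministic rules handle the boring fixes without any risk of a model "hallucinating" a rewrite; an LLM only earns its keep on context-dependent grammar.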
The amount of computing power it would take to replicate a Pelle Søeborg or Gerry Leone is far beyond what any publisher will be able to afford for quite some time. You can rest easy that there is an actual human behind every story you read in Model Railroader.
Eric
Thank you.
The most common image created in the minds of folks today who fear the overuse of AI is Skynet from the Terminator franchise. A few will remember HAL from 2001 or the M-5 from Star Trek's "The Ultimate Computer." For another interesting image, I refer you to Colossus from The Forbin Project.
Some say that these visions are simply fear of the unknown and of inevitable progress.
I tend to go the other way and say that we don't fear the unknown and "inevitable progress" enough.
I have been accused of being a Luddite (by folks who really don't know what that term means), but, in enough cases to induce caution, I think ol' Ned wasn't wrong.
YMMV. Void in some states. Do not fold, spindle, or mutilate. Dramatic recreation using professional actors. Or maybe recreated using AI? Would you know? Do you care?
:maniacal laugh:
I'm piggybacking off of what Eric posted: AI and LLMs only "know" what has been given to them. This is not helpful for lesser-known areas of toy trains. For instance, I tested editing an article about toy trains in Argentina. It was not very useful, because little is known about the subject matter. (Here's the article in question, edited by me: Toy trains in Argentina - Trains)
Well, I'm less "afraid" of a rogue AI and more afraid of the potential ubiquity of junk that AI frequently creates. I've experimented with it, and I certainly don't think that it is going to displace all the creative jobs. Why? Because, to put it simply, AI creates garbage. A simple way to understand how many of the AIs work is to think of them as a search engine combined with an averaging equation: they search some kind of database or library (many search the Internet), take what they find that matches, and then "average" the data; in other words, they more or less try to put it together in a way that fits the grammatical rules they are provided with. While this is definitely a very advanced piece of technology, it is not artificial intelligence. Nor does it work well. First of all, it gets things wrong. Often.
For example, here's what Google Gemini had to say about BNSF 1988:
"Based on the available information, there is no BNSF locomotive numbered 1988. It's likely you are thinking of a different railroad's locomotive.
The number 1988 is associated with a specific 'Heritage Fleet' locomotive from Union Pacific, which is a different railroad. Union Pacific's locomotive #1988 is a commemorative unit honoring the Missouri-Kansas-Texas Railroad (often called 'The Katy'), which was acquired by Union Pacific in 1988."
Oh really? And what about this locomotive?
Finally, after a while, I managed to get it to realize that BNSF 1988 did actually exist. It then declared that it was originally BN 6386. WRONG! It then claimed that it was rebuilt and no longer carried the number 1988. ALSO WRONG!
I think that you get the idea.
Furthermore, there are plenty of people saying that AI could create Shakespeare-level literature. This is not practical. Setting aside the fact that AI is generally unreliable, if it searches the Internet to find a basis for its literature, then it will find not only Shakespeare but also random people screaming nonsense on blogs, social media, and forums. And there is a lot more nonsense than Shakespeare. As such, what AI would write would mostly consist of nonsense.
End of speech.
Is there reason to worry? Not when we're talking about railroads or researching a subject.
We do need to worry that governments are increasingly relying on AI to operate. Are government policies going to be influenced by incorrect AI information?
What about defense? Is the military going to rely on AI to carry out certain operations? Will AI determine if or when to fire weapons and at what targets? On and on with this.
I think we have very good reasons for worrying about AI and how far we will allow it to go.
Agree with York1.
But concerning AI creating garbage: too few people know, or care, about the differences between good work and garbage. Too many are perfectly willing to accept garbage. (It would be easy for me to ascend my soapbox concerning education, but I will restrain myself.)
Photo of me taken in 2005 at the Proud Studio in Chonburi, Thailand:
AI-generated animation created in 2025 using a sample program:
I have to agree. Too many people fit this category. And, because of that, too much garbage is created.
Yep, and that movie came about because James Cameron couldn't get his answering machine to work, so he got mad and cussed at the popcorn maker. Yes, before any of the fanboys come chiming in, I know all about it being from some dream he had, but my version is funnier, so I'm taking some artistic license. Why? Because I feel like it, that's why.