Modern diesel efficiency compared to first generation.

The presentation has everything to do with changes in the overall fuel efficiency of locomotives. I grant you, however, that it doesn’t contain specific data on changes in the fuel efficiency of locomotive diesel prime movers.

“Overall fuel efficiency” meaning ton-miles per gallon of fuel. The report says nothing about horsepower-hours per gallon of fuel – they couldn’t measure that, of course. Apparently the average ton-mile has simply become easier to produce, because the locomotive’s SFC (fuel burned per hour divided by drawbar horsepower) certainly hasn’t dropped by “close to 50%”.
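As a toy illustration of that distinction (all numbers below are invented, not taken from the report), ton-miles per gallon can climb from operating changes alone while SFC sits still:

```python
# Toy numbers (invented, not from the report) showing why ton-miles/gallon
# and SFC can move independently.

def ton_miles_per_gallon(trailing_tons, miles, gallons):
    """The railroad's 'overall fuel efficiency': work moved per unit fuel."""
    return trailing_tons * miles / gallons

def sfc_lb_per_hp_hr(fuel_lb_per_hr, drawbar_hp):
    """Prime-mover efficiency: fuel burned per horsepower-hour."""
    return fuel_lb_per_hr / drawbar_hp

# A longer, slower train raises ton-miles/gallon with the same engine...
print(ton_miles_per_gallon(10_000, 100, 2_000))  # 500 ton-mi/gal
print(ton_miles_per_gallon(16_000, 100, 2_500))  # 640 ton-mi/gal
# ...while SFC, a pure engine figure, need not change at all:
print(sfc_lb_per_hp_hr(1_400, 4_000))            # 0.35 lb/hp-hr
```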

Work performed by unit of fuel is the only way for railroads to effectively measure economic fuel consumption.

Railroads don’t measure how much fuel it takes to run light power down to the corner store for a gallon of milk and a pack of gum.

Before you get too carried away with semantics: the thread is about locomotive prime-mover efficiency, not horsepower per ton-mile, which implicitly includes train speed. If you were to extend Iden’s graph past 2015, I don’t doubt you would see “improvements” in fuel consumption due to the new drag era of slower, longer trains, just as you see some of his anticipated gains from systems like LEADER and Trip Optimizer, or from expanded use of distributed power.

That only incidentally involves higher efficiency in the diesel prime movers: for that, you’d need to invoke things like variable-vane or staged turbos, better EFI and what used to be called FADEC, better tribology and ring/bore friction reduction… and on the negative side, efficiency reductions related to such details as idiot EGR and DPFs that are ‘cleaned’ regeneratively.

I’m sure there are SAE papers somewhere covering post-2012 discussions of heavy diesel combustion-engine efficiency.

OM, I understand what you’re saying, but at some point one would also have to consider the effect of design changes outside the prime mover on overall locomotive efficiency. Current EMD/GE models, thanks to advances in wheel-slip control, adhesion, and AC traction motors, can put much more of the power they generate to the rail to move tonnage. Early 2nd-generation models, such as the SD45 and GP40, had to derate themselves at maximum load at low speeds to avoid massive wheel slip. I’m sure that early 1st-generation models had even lower adhesion values and more primitive wheel-slip controls. Somehow all of this should get factored in, albeit with apples-to-apples comparisons between similar train types and operating speeds.
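To put rough numbers on the adhesion point (the percentages and weight below are generic assumptions for each era, not measured figures for these models): tractive effort before slip is roughly adhesion times weight on drivers.

```python
# Rough illustration of the adhesion point. The adhesion values are
# generic assumptions for each era, not measured figures for these models.

def tractive_effort_lb(weight_on_drivers_lb, adhesion):
    # Maximum tractive effort before slip ~ adhesion * weight on drivers
    return weight_on_drivers_lb * adhesion

weight = 415_000  # lb, a typical six-axle road unit (assumed)

for era, mu in [("1st generation, primitive slip control", 0.18),
                ("2nd generation DC (SD45/GP40 era)", 0.22),
                ("modern AC traction with creep control", 0.35)]:
    print(f"{era}: ~{tractive_effort_lb(weight, mu):,.0f} lb")
```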

Yep. I’m sure mechanical engineers and railroad engineers have entirely different sets of criteria.

ns145, you are correct of course, but the thread is only about the efficiency improvements in the internal-combustion prime mover.

Not really. The OP included the word “locomotive” in his question, so that would include more than the engine itself.

If we are deciding to compare overall locomotive efficiency against ‘first generation’, all the points you made are valid.

Intermediate improvements not explicitly mentioned are the introduction of traction alternators on nominally DC locomotives, the introduction of practical ‘creep control’ and other traction enhancement on DC motors, and the list of practical improvements in the Dash 2 series for EMD and the general improvements and QC in the late -8 and -9 GEs.

Not everything was forward. We might examine 6,000 hp locomotives in North America, the wretched electronics implementation in the EMD 50-series, and the phenomenon that was the Republic Starships for the ‘progress is not always forward’ discussion corner.

Progress is rarely if ever a straight line. Not every ‘new’ idea actually works the way it was intended to work. Such is humanity and its path from the caves and savannas of 100K years ago.

A locomotive is much more than just the operation of its prime mover.

First time I’ve heard LEADER or Trip Optimizer referred to as “smart” systems. Definitely something you won’t hear from anyone who uses them.

Jeff

Note that I put the word in quotes. The current train management systems are ‘smart’ in the sense that smart houses and smart bombs are.

And remember that the smart in ‘whip-smart’ has two meanings… [:O]

With respect to improving efficiency, I was reading a couple of articles about a company that is using a “commutating pole” to drastically reduce switching loss in inverters. They claim their prototype board is capable of achieving a peak of 99.5% efficiency in converting 600 to 800 VDC to three-phase AC.

I’ve called out “switching loss” because the active devices (e.g. IGBTs, FETs, SiC FETs, GaN FETs, GTOs) experience two kinds of loss. One is conduction loss, caused either by the voltage drop across Rds_on in the various FETs or by VCE_on in IGBTs and GTOs. The other is switching loss, much of which goes into charging and discharging device capacitances; that is what the new technology promises to drastically reduce with switched capacitance (this is not snake oil).
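A back-of-envelope sketch of the two loss terms for one device in a hard-switched leg, with assumed parameter values rather than anything from a real datasheet:

```python
# Back-of-envelope model of the two loss terms. All parameter values are
# assumed for illustration, not taken from any datasheet.

def conduction_loss_w(i_rms, rds_on):
    # P_cond = I_rms^2 * Rds_on (use Vce_on * I_avg instead for an IGBT/GTO)
    return i_rms**2 * rds_on

def switching_loss_w(c_oss, v_bus, f_sw):
    # Capacitive part only: ~0.5*C_oss*V^2 dumped per hard turn-on plus a
    # comparable charging event, so ~C_oss*V^2 per PWM cycle. Overlap (V*I)
    # loss during each transition adds more in practice.
    return c_oss * v_bus**2 * f_sw

print(conduction_loss_w(i_rms=100, rds_on=0.010))          # 100 W
print(switching_loss_w(c_oss=2e-9, v_bus=700, f_sw=20e3))  # ~19.6 W
```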

Note that the real benefit from increasing efficiency from 99% to 99.5% is not the savings in energy, but the reduction in heat that must be removed by coolants: cutting losses from 1% to 0.5% halves the waste heat.
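The arithmetic, taking the 300 kW demo inverter mentioned below as the output power:

```python
# Why the half-point matters: the loss, not the throughput, is what the
# coolant has to carry away.
p_out = 300e3  # W, the 300 kW demo inverter mentioned below
for eff in (0.99, 0.995):
    heat_kw = p_out * (1 - eff) / eff / 1e3
    print(f"{eff:.1%} efficient -> {heat_kw:.2f} kW of heat to remove")
# 99.0% -> ~3.03 kW; 99.5% -> ~1.51 kW: the cooling burden is halved.
```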

What makes those inverters ‘new’?

https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1144&context=ecetr

Coming up with a control algorithm that will adjust timing to accommodate changes in the circuit parameters. Examples include junction temperature affecting rise and fall times, with corresponding changes in the required dead time. One difference between then and now is that the real-time processing power to do cycle-by-cycle timing adjustment is a lot more affordable now than it was back then. Another difference is the use of SiC or GaN FETs, neither of which has the turn-off current tail of an IGBT. The 300 kW demo inverter uses UnitedSiC (now part of Qorvo) cascode JFETs.
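A minimal sketch of what cycle-by-cycle dead-time adjustment might look like; the linear temperature-drift model and the gains here are invented for illustration, not taken from the Purdue work:

```python
# Minimal sketch of cycle-by-cycle dead-time adjustment. A real controller
# would live in an FPGA/MCU PWM timer and work from measured transitions.

def required_dead_time_ns(tj_c, base_ns=50.0, drift_ns_per_c=0.2):
    # Assumption: device fall time (and so the safe dead time) grows
    # roughly linearly with junction temperature.
    return base_ns + drift_ns_per_c * (tj_c - 25.0)

dead_time_ns = 100.0  # conservative startup value
for tj in (25, 75, 125):  # junction temperature samples, one per PWM cycle
    target = required_dead_time_ns(tj)
    dead_time_ns += 0.5 * (target - dead_time_ns)  # damped update
    print(f"Tj = {tj} C -> dead time {dead_time_ns:.1f} ns")
```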

I came across a similar situation with an H-bridge using SiC FETs. Looking at the circuit response while adjusting dead time, I saw that increasing the dead time a bit allowed the slightly inductive load to commutate the H-bridge: the “inductive current” would charge the output capacitance of the device that had just turned off, and the other device in the totem pole could then be turned on with zero voltage across it.
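A rough estimate of the dead time that commutation needs, assuming the load current is roughly constant over the transition and using made-up component values:

```python
# Rough estimate (assumed values) of the dead time the inductive load needs
# to swing the switch node rail-to-rail, so the incoming device turns on at
# zero voltage: t_dead ~ C_node * V_bus / I_load for roughly constant current.

c_node = 2e-9   # F: combined output capacitance on the switch node (assumed)
v_bus  = 700.0  # V bus voltage
i_load = 50.0   # A of inductive current at the switching instant (assumed)

t_dead_s = c_node * v_bus / i_load
print(f"minimum dead time ~ {t_dead_s * 1e9:.0f} ns")  # ~28 ns
```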

The reason eliminating switching loss is a big deal is that otherwise the design has to strike a compromise between switching and conduction losses. Conduction losses can be reduced by going to a larger die, but that comes at the expense of increased drain-source capacitance, which costs more energy each time a device switches. Eliminating switching losses also allows operation at a higher frequency, which makes filtering easier.
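A toy model of that compromise, with made-up coefficients: scaling the die area by a factor k cuts Rds_on roughly as 1/k but grows output capacitance roughly as k, so the total loss has a minimum.

```python
# Toy model (made-up coefficients) of the die-size compromise.

def total_loss_w(k, i_rms=100, rds_on_ref=0.010, c_ref=2e-9, v=700, f=20e3):
    p_cond = i_rms**2 * (rds_on_ref / k)  # conduction loss falls with area
    p_sw   = (c_ref * k) * v**2 * f       # capacitive switching loss grows
    return p_cond + p_sw

for k in (0.5, 1, 2, 4):
    print(f"k = {k}: {total_loss_w(k):.0f} W total")
# The minimum sits near k ~ 2 with these numbers; kill the switching term
# (the commutating-pole promise) and the optimum moves to a much larger die
# with far lower conduction loss.
```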

I’ll have to pass the link to my son at Purdue as one of his housemates is working on Purdue’s DC House. Yes, I did see the purdue.edu in the link…

The story of the commutating pole for inverters reminds me of a story about a high-power, low-distortion class A + AB audio amplifier stage.