Ram-Compression / Ice-Buildup


Zipper730

I was told that carburetor intakes that exploited ram compression tended to have greater odds of ice buildup. I'm curious how that's so, as compressing airflow usually causes it to heat up (however little at the speeds involved) rather than cool down.
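For what it's worth, the heating from ram compression really is tiny at the speeds in question. A back-of-envelope Python sketch, assuming the full stagnation temperature rise is recovered (real intakes recover less, so the true rise is even smaller):

```python
# Rough sketch of how little ram compression heats the intake air.
# Full stagnation temperature rise for adiabatic flow: dT = V^2 / (2 * cp).
# Illustrative numbers only.

CP_AIR = 1005.0  # specific heat of air at constant pressure, J/(kg*K)

def ram_temp_rise_kelvin(speed_mph: float) -> float:
    """Maximum (stagnation) temperature rise from ram compression."""
    speed_ms = speed_mph * 0.44704  # mph -> m/s
    return speed_ms ** 2 / (2.0 * CP_AIR)

for mph in (150, 250, 350):
    print(f"{mph} mph -> about {ram_temp_rise_kelvin(mph):.1f} K of ram heating")
# ~2 K at 150 mph, ~6 K at 250 mph, ~12 K at 350 mph: nowhere near enough to
# offset the venturi and fuel-evaporation cooling discussed further down.
```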
 
When you're using ram air you're getting air direct from the outside: colder air, usually containing more moisture, than the warmer, drier air taken in under the engine cowling.

Just look under any car hood: during colder weather the air fed to the induction system gets warmed up, while in warmer weather it takes a more direct route, with no heat assist.

I used to have a car that had a ram air system; it did give you a little more power on hot summer days. But drive it into a rainstorm and the engine would barely run at all. And during the winter I had to deactivate it altogether.
 
Generally, in the piston pounders of the era, the carb inlet lips were not deiced. The restriction of the inlet by ice, and also the possible ingestion of ice, can have deleterious effects. Some of the DHC2 Beavers (upper induction) do have inlet heat that is applied simultaneously with the application of carb heat.

For turbine engines the inlet deicing is quite critical as ice ingestion can damage the turbine blades. Turboprops often incorporate some sort of inertial separator to allow ice chunks to bypass the compressor section.
 
Grumman said that the F6F was slower than the F4U because the Hellcat routed the air through the front of the cowl in order to deice it, while the F4U took it in at the wing root and provided a more direct path. Note, though, that on the F8F they took the air in at the wing root.

Of course Grumman also said that most of the speed difference between the Hellcat and Corsair was due to ASI error on the F4U.
 
The other difference was that on the F4U the air could be routed to bypass the auxiliary supercharger and go directly to the carb, giving a better ram effect (at low altitude only), while on the F6F the air always went through the auxiliary supercharger, whether it was clutched in and spinning or not.
On the F4U once the auxiliary supercharger was engaged the duct doors/flaps changed and the air went through the aux supercharger.
 
Yes, and I am amazed that the F6F and F4U pilots had time to fly and fight with all that switching going on. At least the P-61 pilots had more time for that kind of thing. An FM-2 pilot said he never saw an IJN airplane above 15K ft and wondered why anyone would bother with two-stage superchargers. The F7F and F8F went to single stage; by that time the USN was satisfied that the only high-altitude bombers were from the USAAF, and they did not have to win the Battle of DC by defeating the Y1B-17 in war games. The joke was on them, though: jets, their own Bat missile, the V-1 and V-2, and nukes changed the whole picture in a big way.
 
colder air, and usually containing more moisture, than warmer, drier air

Exactly the opposite. Cold air contains very little water vapor relative to warmer air. That's why clothes/hair/etc. driers use warm air rather than cold and relative humidity levels in your home drop severely during cold weather.

Back in Ye Olden Times when cars had carburetors occasionally the engine would start just fine cold, jump up on the fast idle cam of the carburetor and only a few minutes later experience the RPM dropping, the engine chugging and blowing black smoke from the tailpipe and eventually stall. By the time you coasted to the side of the road, kicked the transmission into neutral and twisted the key, the problem had miraculously healed itself and all was fine. It's called carburetor icing, and it's a real devil to eradicate.

Carburetors can form ice in them because they rely on a venturi in their design. Venturis are tube-like passages that are wide at one end, taper somewhere in the middle and then eventually widen back to their original dimension. The purpose of the venturi in a carburetor is to create a low-pressure region by increasing the speed of the air flowing through it. The low-pressure region, working in conjunction with the atmospheric pressure acting on the fuel in the float bowl, allows a carburetor to feed fuel to the engine. As a side effect of the air going through the venturi, the temperature drops substantially. If cold, damp air is added into the mix, the throat of a carburetor becomes an ice-making machine.
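A minimal sketch of the venturi effect described above, with assumed, illustrative geometry and flow numbers rather than any particular carburetor:

```python
# Why a carburetor venturi cools the air: continuity speeds the flow up in
# the throat, and conserving total (stagnation) temperature then forces the
# static temperature down. Incompressible Bernoulli is adequate at carb speeds.

CP_AIR = 1005.0   # J/(kg*K)
RHO_AIR = 1.2     # kg/m^3, near sea level

def venturi_drop(inlet_velocity_ms: float, area_ratio: float):
    """Static temperature (K) and pressure (Pa) drop at the venturi throat.

    area_ratio = inlet area / throat area, so > 1 for a real venturi.
    """
    throat_velocity = inlet_velocity_ms * area_ratio                      # continuity
    dT = (throat_velocity**2 - inlet_velocity_ms**2) / (2.0 * CP_AIR)     # energy balance
    dP = 0.5 * RHO_AIR * (throat_velocity**2 - inlet_velocity_ms**2)      # Bernoulli
    return dT, dP

dT, dP = venturi_drop(inlet_velocity_ms=30.0, area_ratio=3.0)
print(f"throat cools by ~{dT:.1f} K, static pressure drops by ~{dP / 1000:.1f} kPa")
# The pressure drop is what lifts fuel out of the float bowl; the temperature
# drop is the first ingredient of carb ice.
```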
Often referred to as carburetor or intake-system icing, this phenomenon can still happen with electronic fuel injection, but since there is no venturi in EFI, the occurrence is rare. In the case of EFI, the restriction of the throttle plate(s) causes a cooling effect, and if the air is laden with moisture, ice may form.

The conditions required for carburetor icing to occur are moist air with a temperature as low as 13 degrees Fahrenheit and as warm as 55 degrees F. In most instances, once the ambient air temperature gets down into the low teens, the propensity for ice to form is so greatly reduced as to be almost nonexistent. At lower temperatures and humidity levels, there is just not enough water in the air for the problem to manifest itself.

Compounding the icing problem is the fact that carburetors are devices that convert liquid to vapor. Whenever you've gotten gasoline or alcohol on your hands, you've probably noticed how cool they feel as the liquid evaporates. Likewise, within the carburetor, as fuel transitions from a liquid state to a vapor state, there is a cooling effect on the air inside. This, along with the cooling phenomenon of air flowing through the venturi, can cause carburetor icing to occur at temperatures well above freezing.
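And a quick estimate of the evaporative-cooling part, assuming round numbers (gasoline latent heat of roughly 350 kJ/kg and a rich mixture of about 1:12 fuel to air by mass):

```python
# All the heat needed to vaporize the fuel comes out of the incoming charge,
# so the temperature drop is roughly (fuel/air ratio) * latent heat / cp.

CP_AIR = 1005.0          # J/(kg*K)
H_VAP_GASOLINE = 350e3   # J/kg, approximate latent heat of vaporization

def evaporation_cooling(fuel_air_ratio: float, fraction_evaporated: float = 1.0) -> float:
    """Temperature drop of the charge from fuel evaporation alone, in kelvin."""
    return fuel_air_ratio * fraction_evaporated * H_VAP_GASOLINE / CP_AIR

print(f"~{evaporation_cooling(1 / 12):.0f} K if all of the fuel flashes to vapor")
print(f"~{evaporation_cooling(1 / 12, 0.5):.0f} K if only half of it does")
# Call it 15-30 K, stacked on top of the venturi drop -- which is why carb ice
# forms readily at ambient temperatures well above freezing.
```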
 
I believe that the favorite temp for carb ice is 68F. However, according to a chart put out by Transport Canada, if you have temps a bit above freezing and visible moisture - such as you can see coming off of melting snow - that is the worst condition of all, and the one where I got some once.

The other week in FL we had temps in the mid-40's and I thought I would never get that C-85 started. Finally resorted to the old trick of opening the primer and leaving it open until the engine got running.
 


I would note a couple of things.
1. The Wright engine used in the FM-2 had a full-throttle height of 17,000 ft, compared to earlier Cyclones making the same power at 13,500 ft. The two-stage P&W used in the F4F-4 and the FM-1 made 40 hp more, 1,400 ft higher, than the FM-2 engine.
2. Likewise, the F8F and F7F used models of the R-2800 that had different superchargers than the earlier single-stage engines and made the same power 2,500 ft higher than the old single-stage engines.
 
People speak of foggy, wet conditions as having "Heavy air."

But wet air is lighter in density than dry air, and thus less useful for engines and for wings and props.

Few people besides mechanical engineers and meteorologists realize this. I have had a very intelligent Civil Engineer tell me that can't be right.
 
Let's begin with the basics: as altitude increases, the air grows thinner. At 18,000 feet, for example, atmospheric pressure is about half of what it is at sea level, and air density is down to a bit over half. Lower air density reduces air resistance, so level speed increases for a given engine power as an aircraft climbs. However, declining air density also reduces the oxygen the engine needs to burn fuel. Up to some altitude, reduced air resistance is more important, and an aircraft's level speed increases as altitude increases. Beyond this altitude, however, power loss dominates, and level speed decreases. This altitude is called the "critical altitude."
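A toy model of that trade-off, assuming a standard (ISA) atmosphere and that parasite drag dominates at top speed; the numbers are illustrative only:

```python
# If the engine can hold its power, required power ~ 0.5 * rho * V^3 * S * Cd,
# so level top speed scales as rho^(-1/3): thinner air means a faster airplane,
# right up until the engine can no longer make that power.

def isa_density(alt_m: float) -> float:
    """ISA troposphere density, kg/m^3 (valid up to ~11 km)."""
    temp_k = 288.15 - 0.0065 * alt_m
    pressure_pa = 101325.0 * (temp_k / 288.15) ** 5.2561
    return pressure_pa / (287.05 * temp_k)

def level_speed_ms(alt_m: float, sea_level_speed_ms: float) -> float:
    """Level top speed at altitude for the same shaft power as at sea level."""
    return sea_level_speed_ms * (isa_density(0.0) / isa_density(alt_m)) ** (1.0 / 3.0)

for alt_ft in (0, 10000, 20000):
    speed = level_speed_ms(alt_ft * 0.3048, 150.0)   # 150 m/s ~ 335 mph at sea level
    print(f"{alt_ft:>6} ft: ~{speed / 0.44704:.0f} mph")
# ~335, ~371 and ~414 mph: speed keeps rising with altitude until the engine
# runs out of breath at its critical altitude, after which the trend reverses.
```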

To compensate for reduced air density at higher altitudes, aircraft engines in World War II used forced induction, which compressed the air before mixing it with fuel. Thinning air would eventually win out, but the aircraft's new critical altitude was considerably higher thanks to forced induction, and performance at altitude was considerably better.

In addition, forcing even more air into the cylinders would allow more fuel to be fed in as well, increasing engine power beyond its natural sea-level value. This would boost engine performance even at low altitudes. Forced induction in World War II produced up to twice normal atmospheric pressure at the manifold, thus forcing twice as much air into the engine. Of course, engines had to be built much stronger to deal with the higher power, and engines now needed gasoline that was 100 octane or higher, because compressing the charge also heats it. If this hot air is mixed with gasoline and fed directly into the cylinder, the engine might be able to handle it. However, beyond some point, the hot charge will detonate prematurely, reducing engine efficiency or even damaging the engine.

If the detonation problem is not too great, the aircraft can use higher-octane gasoline, which better resists detonation. In the 1930s, a typical rating for aviation gasoline was 87 octane, and many countries continued to use 87-octane gas during the war. Thanks to pre-war work at Shell Oil that involved Jimmy Doolittle, the United States entered the war with 100-octane avgas, although producing and distributing this high-quality fuel to combat units often fell behind needs. Theoretically, gasoline cannot go above 100 octane, which is effectively pure octane. However, performance can still be increased by adding other substances; the United States produced a considerable amount of 130 "performance number" gasoline during the war.
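Roughly how hot the compressed charge gets can be sketched like this, assuming ideal-gas air and an illustrative adiabatic efficiency (actual WWII blowers varied widely):

```python
# Compression heating: ideal (isentropic) outlet temperature is
# T2 = T1 * PR^((gamma-1)/gamma); a real blower adds more heat still,
# captured here with an assumed adiabatic efficiency.

GAMMA = 1.4          # ratio of specific heats for air
T_AMBIENT = 288.0    # K, roughly sea-level standard day

def charge_temperature_k(pressure_ratio: float, adiabatic_efficiency: float) -> float:
    """Charge temperature after the supercharger, in kelvin."""
    ideal_rise = T_AMBIENT * (pressure_ratio ** ((GAMMA - 1.0) / GAMMA) - 1.0)
    return T_AMBIENT + ideal_rise / adiabatic_efficiency

for pr in (1.5, 2.0, 3.0):
    t_c = charge_temperature_k(pr, adiabatic_efficiency=0.7) - 273.15
    print(f"pressure ratio {pr}: charge at ~{t_c:.0f} C")
# Roughly 65 C, 105 C and 165 C -- hot enough that detonation, not mechanical
# strength alone, sets the boost limit; hence 100-octane fuel and intercooling.
```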
A more comprehensive way to reduce detonation is to use an intercooler, which is essentially a radiator for the compressed charge. The drawback here is that radiators are heavy, and an intercooler requires additional tubing between the three devices involved (supercharger, intercooler, and engine).
Another way to reduce detonation for brief periods of time was water injection. Water (usually mixed with methanol) was injected into the engine with the air/fuel charge. This reduced the temperature considerably, making the air even denser. However, the added coolant disrupted combustion to some extent, and the charge did not burn completely, causing black smoke. The disruption had to be kept brief or the engine would stop working. Consequently, water injection was used only for short periods, such as take-off or emergency combat boost.
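A rough estimate of the charge cooling from ADI, with assumed values (latent heat of water about 2.26 MJ/kg and a water flow around half the fuel flow; the methanol in the mix changes the figures somewhat):

```python
# Injected water evaporates in the induction system and takes its latent heat
# out of the charge: dT ~ (water/air mass ratio) * latent heat / cp.

CP_AIR = 1005.0        # J/(kg*K)
H_VAP_WATER = 2.26e6   # J/kg, latent heat of vaporization of water

def adi_cooling_k(water_air_ratio: float, fraction_evaporated: float) -> float:
    """Charge temperature drop from the water that actually evaporates."""
    return water_air_ratio * fraction_evaporated * H_VAP_WATER / CP_AIR

# fuel:air ~ 1:12 at high power, water ~ half the fuel flow -> water:air ~ 1:24
print(f"~{adi_cooling_k(1 / 24, 0.5):.0f} K cooler if half the water flashes off")
# A few tens of kelvin knocked off the charge temperature, enough to hold off
# detonation at boost settings the engine could not otherwise sustain.
```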

Now consider the device which compresses the air fed to the engine. There are two types of forced-induction devices; they differ in how the impeller's rotation is powered.
The first is the supercharger, in which the engine itself powers the impeller via a short drive shaft.
The second type of forced-induction device is the turbocharger. In turbos, the impeller is powered by the engine's exhaust. High-performance aircraft engines produce very hot, fast, high-pressure exhaust gas (rear-facing exhaust manifolds actually gave extra thrust that might boost the aircraft's forward speed by five miles an hour or more). The hot, fast, high-pressure exhaust gases spin a turbine, which then drives the impeller.

In theory, turbochargers are better than superchargers. A supercharger "steals" power from the engine, lessening its benefit. For example, if a supercharger increases engine performance by 200 horsepower but requires 50 horsepower from the engine to drive its impeller, the net gain is only 150 horsepower. In contrast, the exhaust gas that drives a turbocharger is essentially free energy. This is not entirely true, but back pressure from the turbine does not substantially reduce engine power (as ambient air pressure decreases, in fact, back pressure from the atmosphere itself decreases; the exhaust actually becomes faster as altitude increases).
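The size of that "steal" follows directly from the charge heating: drive power is just mass flow times the enthalpy rise. A sketch with assumed, illustrative numbers (real installations varied a lot):

```python
# Power taken from the crankshaft to spin the impeller:
# drive power = mass flow * cp * temperature rise / drive efficiency.

CP_AIR = 1005.0       # J/(kg*K)
WATTS_PER_HP = 745.7

def drive_power_hp(mass_flow_kg_s: float, temp_rise_k: float,
                   drive_efficiency: float = 0.95) -> float:
    """Shaft power absorbed by a mechanically driven supercharger, in hp."""
    return mass_flow_kg_s * CP_AIR * temp_rise_k / drive_efficiency / WATTS_PER_HP

print(f"~{drive_power_hp(1.0, 90.0):.0f} hp to push 1 kg/s of air through a ~90 K rise")
# Around 130 hp in this made-up case -- not trivial, which is exactly the
# attraction of letting otherwise-wasted exhaust energy do the job instead.
```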

Unfortunately, it is difficult to handle pressurized, fast, and very hot exhaust gas. Special materials were needed, and these were underdeveloped during World War II. Turbocharger production was difficult throughout the war, and reliability was a frequent issue. World War II was a few years too early for turbos in terms of materials science.

Turbochargers also require a great deal of heavy tubing to contain the hot gas flows. This tubing carries the hot engine exhaust to the turbocharger and delivers the compressed air back to the engine. When an intercooler is used, the amount of plumbing increases considerably. In a bomber, there is room for this piping. In fighters, the amount of piping needed is a serious design issue.

Now we come to the speed at which the compressor is turning and the staging of the compressors.
All combat aircraft in World War II used superchargers. However, a single stage of supercharging was effective only up to about 16,000 feet. One solution was to use two stages of supercharging. Each stage had its own impeller, diffuser, and horn. These were placed in series, the first stage feeding into the second. At lower altitude, the second stage was bypassed by the pilot to prevent overboost. As the first stage became insufficient during a climb, the second stage was kicked back into the flow.
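A sketch of why a single stage runs out of breath, assuming an ISA atmosphere, an illustrative boost target, and the round figure that one WWII centrifugal stage could deliver only about a 2.5-3:1 pressure ratio:

```python
# The pressure ratio needed just to hold a fixed manifold pressure grows
# quickly with altitude as the ambient pressure falls away.

def isa_pressure_pa(alt_ft: float) -> float:
    """ISA static pressure at altitude (troposphere only)."""
    alt_m = alt_ft * 0.3048
    return 101325.0 * (1.0 - 2.25577e-5 * alt_m) ** 5.2559

TARGET_MANIFOLD_PA = 1.3 * 101325.0   # illustrative: roughly +4 lb of boost

for alt_ft in (10000, 16000, 25000, 35000):
    needed = TARGET_MANIFOLD_PA / isa_pressure_pa(alt_ft)
    print(f"{alt_ft:>6} ft: pressure ratio needed ~ {needed:.1f}")
# ~1.9, ~2.4, ~3.5 and ~5.5: beyond the mid-teens of thousands of feet you
# either gear the single stage faster (and cook the charge) or put two stages
# in series and split the work between them.
```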

Although combat aircraft all used supercharging for the first stage of forced induction, some used a turbocharger for their second stage. Given production difficulties, turbocharging was much more expensive. Also, while two stages of supercharging were sometimes possible without an intercooler, turbocharging almost always required an intercooler.

Another way to deal with the need for very different pressure boosts at lower and higher altitudes was to give superchargers multiple speeds. This required a gear and clutch assembly controlled by the pilot. Some British Merlin superchargers had three speeds. The German DB 601 and DB 605 engines used in most Bf 109s carried this trend to the logical extreme. Using fluid coupling with the engine, their superchargers could vary boost smoothly over a considerable range. These adjustments, furthermore, were handled automatically by a barometric-based control. This freed the fighter pilot to concentrate on his opponent.

Of course, having two or more speeds does not rule out also having two stages. Some of the later Merlin engines had two supercharger stages, each with three speeds. For the Fw 190, the BMW 801R under development at the end of the war had a two-stage, four-speed supercharger.
 
But wet air is lighter in density than dry air, and thus less useful for engines and for wings and props.
Most definitely. Consider: oxygen is a diatomic molecule (two oxygen atoms), weight 32 amu.
Nitrogen is also diatomic, weight 28 amu.
Carbon dioxide is CO2, weight 12 + 32 = 44 amu.
The water molecule H2O is just 2 + 16 = 18 amu.

Any water vapor that gets added replaces either nitrogen or oxygen in our free-moving air, and water vapor molecules are lighter than both nitrogen and oxygen. In other words, humid air is going to have less of the heavy nitrogen and oxygen, with lighter water molecules in their place. Remember that equal volumes at the same temperature and pressure contain the same number of molecules, so the air with water vapor is simply less dense.

If temperature and pressure are the same, dry air will be heavier because it lacks the lighter water vapor molecules. And keep in mind that it's the interplay between dry and humid air that causes big storms. When dry, dense air moves under moist, light air, it lifts the moist air and creates the right conditions for thunderstorms, since temperature decreases with height. As the temperature drops, the water condenses into very small drops (clouds), which grow bigger, blocking light (dark storm clouds), and eventually get big enough to fall (rain).
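The same point in numbers, assuming ideal-gas mixing and the Tetens approximation for saturation vapor pressure (both rough but standard):

```python
import math

R_DRY = 287.05   # J/(kg*K), specific gas constant of dry air (M ~ 29 g/mol)
R_VAP = 461.5    # J/(kg*K), water vapor (M ~ 18 g/mol)

def saturation_vapor_pressure_pa(temp_c: float) -> float:
    """Saturation vapor pressure of water, Pa (Tetens approximation)."""
    return 610.78 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def air_density(temp_c: float, relative_humidity: float,
                pressure_pa: float = 101325.0) -> float:
    """Density of moist air from the partial pressures of dry air and vapor."""
    p_vap = relative_humidity * saturation_vapor_pressure_pa(temp_c)
    p_dry = pressure_pa - p_vap
    temp_k = temp_c + 273.15
    return p_dry / (R_DRY * temp_k) + p_vap / (R_VAP * temp_k)

print(f"dry 30 C air:       {air_density(30.0, 0.0):.4f} kg/m^3")
print(f"saturated 30 C air: {air_density(30.0, 1.0):.4f} kg/m^3")
# ~1.164 vs ~1.146 kg/m^3: the humid air really is lighter, because every
# 18-amu water molecule displaces a 28-amu N2 or 32-amu O2 molecule.
```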
 

A fair overview, but I have some nitpicks.
Forced induction produced up to 3 ata worth of boost, vs. 1 ata of normal air pressure at sea level. Radiators might appear heavy, but the percentage of their weight vs. total aircraft weight was negligible. Water/alcohol injection (ADI) did benefit engine operation, however it was rarely used during take-off - probably never on US aircraft. War emergency power with ADI was allowed for longer durations than without ADI.




Turbos became very mature pieces of engineering during WW2, and worthwhile additions to the Allied war cause. Turbocharger production was easy for the wartime USA, not difficult.
Thrust pushes the aircraft ahead. Engines without turbos have the benefit of exhaust thrust (part of total thrust), unlike most of the turboed engines. Non-turbo engines also benefit from the ram effect much more than turboed engines. So if the engine + prop is supposed to deliver 800 lbs worth at 20,000 ft, the actual values might be 900 lbs (with 100 lbs worth of exhaust thrust) at 24,000 ft (a 4,000 ft gain due to ram).
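The "altitude gain due to ram" can be sketched the same way: ram recovery adds to the static pressure the blower sees, so the engine holds its power higher. Assumed here: an ISA atmosphere, about 350 mph TAS, and a duct that recovers 80% of the dynamic pressure; the numbers are only illustrative.

```python
# Ram recovery raises the pressure at the carb/blower face above ambient, which
# is equivalent to flying the engine at some lower "pressure altitude".

def isa_pressure_pa(alt_m: float) -> float:
    return 101325.0 * (1.0 - 2.25577e-5 * alt_m) ** 5.2559

def isa_density(alt_m: float) -> float:
    temp_k = 288.15 - 0.0065 * alt_m
    return isa_pressure_pa(alt_m) / (287.05 * temp_k)

def pressure_altitude_m(pressure_pa: float) -> float:
    """Altitude at which the ISA static pressure equals pressure_pa."""
    return (1.0 - (pressure_pa / 101325.0) ** (1.0 / 5.2559)) / 2.25577e-5

alt_m = 6096.0    # 20,000 ft
tas_ms = 156.0    # ~350 mph true airspeed
recovery = 0.8    # fraction of dynamic pressure the intake duct recovers
p_at_blower = isa_pressure_pa(alt_m) + recovery * 0.5 * isa_density(alt_m) * tas_ms**2
gain_ft = (alt_m - pressure_altitude_m(p_at_blower)) / 0.3048
print(f"ram is worth roughly {gain_ft:.0f} ft of critical altitude here")
# About 3,000 ft with these numbers, and more with a better duct or at higher
# speed -- in line with the few-thousand-foot gains quoted above.
```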


The 1st stage could be de-clutched on P&W 2-stage engines, with air re-routed around it in that case. The 2nd stage was always on. P&W and Allison called the 1st stage 'auxiliary'; the 2nd stage was the 'main'.
Engines with 2 stages of supercharging from RR, Jumo and DB had both S/C stages installed on the same shaft, hence there was no possibility of de-clutching either stage independently. The 2-stage V-1710s were also without the possibility to de-clutch the 1st stage.


There was no 3-speed S/C gear for any Merlin, apart from possible prototypes. I'd be wary of any claims that this or that engine's S/C had 4 speeds.
3-speed superchargers were used on Jumo 213E and F, and post-war Griffons.
 
Maybe I worded that badly. Most of us on this forum are probably aware that cold air holds less moisture than warm air, but when you're intaking vast quantities of it to feed an engine, it doesn't take much to run into carb ice. Heating the air up before induction helps.
And most carb ice incidents happened during the landing phase of flight, or in lower-rpm cruise flight, when ram air induction, if you had it, wouldn't be very effective.

I don't think I needed a lecture on carbs and icing; I'm 71 and grew up with that type of engine.
Still own one, an '82 Jeep J10; use it to tow race cars in the summer and haul wood in the winter.
 
There was no 3-speed SC gear for any Merlin, apart from possible prototypes.
Tomo, my source text stated:
Merlin engines that powered the Battle of Britain Spitfires had single-stage, single-speed superchargers. It was not until the Merlin Mk XX that a second speed was added, but not a second stage. When Stanley Hooker took over supercharger design at Rolls-Royce, he realized that air flows in the existing Merlin superchargers were imperfect. He improved them, and this resulted in a new single-stage two-speed supercharger for the Merlin Mk 45. This new design allowed output to be raised to 1,515 hp at 11,000 feet. The Royal Air Force put this new engine into the Spitfire Mk V airframe just in time to battle the new Bf 109F, which began to appear in large numbers in early 1941.

The arrival of the Fw 190 in late 1941 made even these engines obsolete. Fortunately, Rolls-Royce was ready with a two-stage, two-speed supercharger for its engines, beginning with the Mk 60 series. These engines powered the Spitfire Mk IX, which restored British parity with the best German fighters. These two-stage supercharged Merlins came considerably later than the two-stage R-1830. In compensation, this delay allowed the Mk 60 and later engines to have not only two stages but also two speeds and eventually three speeds for greater pilot control.

I've done more checking and can find nothing to corroborate this statement EXCEPT:

The Griffon 60, 70, and 80 series featured two-stage supercharging and achieved their maximum power at low to medium altitudes. The Griffon 101, 121, and 130 series engines, collectively designated Griffon 3 SML, used a two-stage, three-speed supercharger, adding a set of "Low Supercharger (L.S)" gears to the already existing Medium and Full Supercharger (M.S and F.S) gears.

So I wonder if the first author confused Merlin and Griffon??
 

Who is the author of the source text?
Hooker's redesigned supercharger (his main work was done on the inlet, actually) was first introduced on the Merlin 20, a.k.a. Merlin XX in Roman numerals: a 1-stage, 2-speed S/C, used in 1940 on the Hurricane II during the later stages of the BoB. The Merlin 45 featured 1 S/C speed only - we can view it as a 'simplified Merlin XX' - and came some 6 months later.
The Merlin XX and 45 were hardly obsolete by 1941; what was obsolete was the horribly outdated float-type carburetor and either too-draggy exhausts (on fighters) or single-manifold exhausts (on bombers). Add the aerodynamic imperfections of the Spitfire (external BP glass, fixed tailwheel, uncovered wheel wells, lousy fit & finish) and the Hurricane (thick, big wing of outdated profile, airbrake layout of the radiators) and it was no wonder that the Fw 190 ruled the day.
The supposed delay, and thus the benefit in the available number of S/C speeds, have nothing in common. The Merlin 60 appeared in a data sheet for the Wellington VI dated 14th August 1941, which is hardly 'considerably later than the two-stage R-1830' - an engine that offered no better power than the 1-stage Merlin XX of mid-1940, BTW. That the British didn't use the Merlin 60 on Spitfires operationally ASAP is another thing; the Merlin 61 was made instead for Spitfires. The Spit IX also introduced new-generation carbs, better exhausts and better fit & finish, and, not much later, the internal BP glass.
As before - no 3-speed S/Cs for service-worthy Merlins.
 
MIflyer said:
Grumman said that the F6F was slower than the F4U because the Hellcat routed the air through the front of the cowl in order to deice it while the F4U took it in at the wing root and provided a more direct path.
I'm confused here:
  1. The F6F's intercooler, oil-cooler, and carburetor intake were mounted under the cowling
  2. Heating up air increases its pressure, last I checked (the Meredith effect was based on this)
 
Who is the author of the source text?
Bent, Ralph D. and McKinley, James L., Aircraft Powerplants, 5th Edition, McGraw-Hill, 1985.

Edgars, Julian, A Guide to Turbos and Superchargers: A Comprehensive Guide to Forced Induction, Clockwork Media Pty Ltd., 2001.

White, Graham, Allied Aircraft Piston Engines of World War II: History and Development of Frontline Aircraft Piston Engines Produced by Great Britain and the United States, Society of Automotive Engineers, 1995.

Wisniewski, Jason R., Powering the Luftwaffe: German Aero Engines of World War II, FriesenPress, 2013.
 
