Allied vs Axis radar....?


Lucky13

Forum Mascot
47,780
24,140
Aug 21, 2006
In my castle....
What was it that made the Luftwaffe use antennas instead of a dish like the RAF and USAAF....was it the radar itself or??
 
At a first guess, the German air-borne radar operated at longer wavelengths, where a dish that can fit into an airplane isn't particularly useful.


Addendum: according to the wikipedia article (I know....), the German radars operated at wavelengths of 1.6 to about 4 m; Allied air-borne radars at 10 cm. Angular resolution, in radians, is roughly wavelength divided by antenna size (more precisely, the sine of the resolution angle is approximately wavelength / antenna size), so to get 1 degree resolution with a 1.6 m wavelength, your antenna needs to be about 91 m across.
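A quick back-of-the-envelope check of that 91 m figure, as a rough sketch in Python (the small-angle approximation is assumed, and real beamwidth also depends on how the aperture is illuminated):

```python
import math

WAVELENGTH_M = 1.6     # metric-wave German airborne radar
RESOLUTION_DEG = 1.0   # desired angular resolution

# Beamwidth (radians) ~ wavelength / aperture, so aperture ~ wavelength / beamwidth
resolution_rad = math.radians(RESOLUTION_DEG)
print(f"Aperture needed at 1.6 m wavelength: ~{WAVELENGTH_M / resolution_rad:.0f} m")  # ~92 m

# The same resolution at the Allied 10 cm wavelength
print(f"Aperture needed at 0.10 m wavelength: ~{0.10 / resolution_rad:.1f} m")         # ~5.7 m
```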
 
Swampyankee pretty much hit the nail on the head.

By "antennas" instead of dishes I assume you mean the dipole and Yagi configurations as seen with the Lichtenstein radar. A dish is also an antenna, just a different configuration. In general you see "antennas" (dipoles and Yagis for example) used when a radar operates at mid UHF and below frequencies (HF/VHF/UHF), and dishes used from mid UHF and up.

Just prior to and early in WW II, both the Allies and the Axis used dipole- and Yagi-configured antennas, for example the AN/SCR-521s on the B-17, B-24, and B-25. This is because most radars of the time operated at mid-UHF and below, call it 900 MHz and down, although that is not a hard limit. The tube technology of the day just was not up to the task of making high peak power at frequencies much above that.

In 1940 the British came up with the first working resonant cavity magnetron (actually the Russians did it a year before the British, but that fact was not known in the West until after the British development, and the Russians took much longer to capitalize on the technology), making high power at much shorter wavelengths, microwaves, possible. The British did not have the time or resources to mature the cavity magnetron technology, so they transferred it to the Rad Lab at MIT in the US. The Rad Lab, working with British partners, rapidly improved the cavity magnetron and the surrounding technologies, making microwave radar a working fact by early 1941 and fielded systems by late 1941.

Initially the new magnetrons worked in the 10 cm (3000 MHz) frequency range, but very rapidly the technology was pushed to 6 and 3 cm (5000 and 10000 MHz).
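Those wavelength/frequency pairs are just λ = c / f; a quick sketch of the conversion, rounded the same way as in the post:

```python
# Frequency-to-wavelength conversion for the bands mentioned above
C = 3.0e8  # speed of light in m/s (rounded)

for freq_mhz in (3000, 5000, 10000):
    wavelength_cm = C / (freq_mhz * 1e6) * 100
    print(f"{freq_mhz:>5} MHz -> {wavelength_cm:.0f} cm")
# 3000 MHz -> 10 cm, 5000 MHz -> 6 cm, 10000 MHz -> 3 cm
```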

In radar use, as frequency goes up, the gain of a given size (aperture) antenna goes up. As gain increases, angular resolution improves and the side and back lobes shrink relative to the main lobe. With greater gain and angular resolution you can detect targets at longer ranges (other factors being equal) and differentiate targets in closer proximity to each other. With reduced side and back lobes, noise from directions other than the desired one is reduced, including intentional noise such as jammers, leading to a higher probability of detection in an active-denial environment.
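That gain-versus-frequency relationship is the standard aperture-gain formula G = 4πAe / λ². A rough sketch, assuming an illustrative 0.75 m dish and a typical 0.6 aperture efficiency (neither figure is from the post):

```python
import math

def dish_gain_dbi(diameter_m: float, freq_hz: float, efficiency: float = 0.6) -> float:
    """Approximate gain of a circular aperture: G = 4*pi*Ae / lambda^2."""
    wavelength_m = 3.0e8 / freq_hz
    effective_area = efficiency * math.pi * (diameter_m / 2) ** 2
    gain = 4 * math.pi * effective_area / wavelength_m ** 2
    return 10 * math.log10(gain)

# The same dish at a metric-wave frequency and at the 10 cm magnetron band:
print(f"0.75 m dish at 200 MHz: {dish_gain_dbi(0.75, 200e6):.1f} dBi")  # ~2 dBi, essentially useless
print(f"0.75 m dish at 3 GHz:   {dish_gain_dbi(0.75, 3e9):.1f} dBi")    # ~25 dBi, a usable pencil beam
```

At metric wavelengths a dish small enough to fit in an aircraft has almost no gain, which is why dipole and Yagi arrays were the practical choice there.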

The Axis powers knew the advantages of higher frequencies; however, they never developed a working compact high-power source like the cavity magnetron. The Japanese did develop a multi-cavity resonant magnetron, but it was large and bulky and does not seem to have made it into radar use in WW II.

A side note on the Rad Lab: during WW II they came up with almost half of all radars developed by all players in the field. The 28-volume MIT Radiation Lab Series, written during and immediately following WW II, became THE cornerstone of radar development worldwide for the next couple of decades. Much of the data in that series was considered classified information until it was published, starting in 1947. While it is available electronically now, I have had a print copy in my library for the last couple of decades.

T!
 