Category Archives: Analysis
Waveguides are transmission lines widely used as guided-propagation devices in very high frequency applications. Their main advantages are low propagation losses, thanks to the use of a single conductor and air instead of the dielectrics found in coaxial cable, a greater power-handling capacity and simple construction. Their main drawbacks are that they are bulky, that they cannot operate below their cutoff frequency, and that transitions to other technologies (such as coaxial or microstrip) often introduce losses. However, finite element method (FEM) simulation allows us to study and optimize the transitions that can be built with these devices, with very good results. In this post we will study waveguides using an FEM simulator such as HFSS, which can analyze three-dimensional electromagnetic fields (3D simulation).
Waveguides are very popular in very high frequency circuits, due to their ease of construction and their low losses. Unlike coaxial lines, the propagated fields are transverse electric or transverse magnetic (TE or TM), so they have a magnetic field component (TE) or an electric field component (TM) in the propagation direction. These fields are the solutions of the Helmholtz equation under certain boundary conditions
and solving these differential equations by separation of variables, applying the boundary conditions of a rectangular enclosure whose walls are all electric walls (conductors, on which the tangential component of the electric field vanishes),
we obtain a set of solutions for the electromagnetic field inside the guide, starting from the expressions shown in fig. 1.
The electromagnetic fields therefore propagate as modes, called TEmn for the transverse electric case (Ez=0) and TMmn for the transverse magnetic case (Hz=0). From the cutoff wavenumber kc an expression is obtained for the cutoff frequency fc, the lowest frequency at which fields propagate inside the waveguide:
The dominant mode has n=0, since although the expression has extrema for m,n=0, the TE00 and TM00 modes do not exist. And since a>b, the lowest cutoff frequency of the waveguide corresponds to the TE10 mode. That is the mode we are going to analyze using a 3D FEM simulation.
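As a quick check of the cutoff expression, here is a short sketch that evaluates fc for the TEmn/TMmn modes, using the guide dimensions a = 3.10 mm and b = 1.55 mm employed later in this post:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def cutoff_frequency(a, b, m, n):
    """Cutoff frequency (Hz) of the TEmn/TMmn mode of a rectangular
    waveguide with inner dimensions a x b (meters)."""
    return (C0 / 2) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# Dimensions used later in the post: a = 3.10 mm, b = 1.55 mm
a, b = 3.10e-3, 1.55e-3
print(f"TE10 cutoff: {cutoff_frequency(a, b, 1, 0) / 1e9:.1f} GHz")  # ~48.4 GHz
print(f"TE01 cutoff: {cutoff_frequency(a, b, 0, 1) / 1e9:.1f} GHz")  # ~96.7 GHz
```

With a = 2b, the TE01 (and TE20) cutoff lands at twice the TE10 cutoff, which is what gives the guide its usable single-mode band.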
SIMULATION OF A RECTANGULAR WAVEGUIDE BY THE FINITE ELEMENT METHOD
In a 3D simulator it is very easy to model a rectangular waveguide, since it is enough to draw a rectangular prism with the appropriate dimensions a and b; in this case, a=3.10mm and b=1.55mm. The TE10 mode starts to propagate at 48GHz and the next mode, TE01, at 97GHz, so the waveguide is analyzed at 76GHz, the frequency at which it will work. Drawn in HFSS, the waveguide looks like this
The inner rectangular prism is assigned vacuum as its material, and the side faces are assigned perfect E boundaries. Two wave ports are defined on the rectangles at −z/2 and +z/2, using the first propagation mode. The next figure shows the E-field along the waveguide
Analyzing the scattering parameters from 40 to 90GHz gives
where it can be seen that the first mode starts to propagate inside the waveguide at 48.5GHz.
From 97GHz the TE01 mode could propagate as well; since it is of no interest here, the analysis is done at 76GHz.
The most common transitions are from waveguide to coaxial line or from waveguide to microstrip, in order to use the propagated energy in other kinds of applications. For this, a probe is placed along the direction of the E-field, which couples its energy onto the probe (for the TE10 mode, the E-field is along the Y-axis).
The probe is a quarter-wave resonant antenna at the desired frequency. Along the X-axis, the E-field maximum occurs at x=a/2, while to locate the maximum along the Z-axis the guide is terminated in a short circuit. The E-field is then null on the guide wall and maximum at a quarter of the guide wavelength, which is
and in our case, at 76GHz, λ is 3.95mm and λg is 5.11mm. The probe length will therefore be 0.99mm and the short-circuit distance 1.28mm.
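These values can be reproduced with a short sketch, assuming the TE10 cutoff of about 48.4GHz computed for this guide:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def guide_wavelength(f, fc):
    """Guide wavelength: free-space wavelength divided by sqrt(1-(fc/f)^2)."""
    lam = C0 / f
    return lam / math.sqrt(1 - (fc / f) ** 2)

f, fc = 76e9, 48.35e9            # working frequency and TE10 cutoff
lam = C0 / f                     # free-space wavelength
lam_g = guide_wavelength(f, fc)  # guide wavelength

print(f"lambda   = {lam * 1e3:.2f} mm")           # ~3.95 mm
print(f"lambda_g = {lam_g * 1e3:.2f} mm")         # ~5.11 mm
print(f"probe length  (lambda/4)   = {lam * 1e3 / 4:.2f} mm")
print(f"backshort gap (lambda_g/4) = {lam_g * 1e3 / 4:.2f} mm")
```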
For coaxial transitions, it is enough to insert a coax whose inner conductor protrudes λ/4 into the guide, at λg/4 from the short circuit. In microstrip transitions, however, a dielectric is used as the support of the conductor lines, so the dielectric effect must be taken into account too.
Our transition can be modeled in HFSS by assigning different materials. The probe is built on Rogers RO3003 substrate, which has a low dielectric constant and low losses, making the transition to microstrip. The lateral faces and the lines are assigned perfect E boundaries, and the substrate volume is assigned the RO3003 material. The inside of the waveguide and the transition cavity are assigned vacuum, and a wave port is defined on the end face of the transition.
Now the simulation is run, analyzing the fields and the S-parameters,
and it can be seen how the E-field couples to the probe and the signal propagates along the microstrip.
Looking at the S-parameters, we can see that the lowest-loss coupling occurs at 76–78GHz, our working frequency.
OTHER DEVICES IN WAVEGUIDES: THE MAGIC TEE
Among the usual waveguide devices, one of the most popular is the Magic Tee, a special hybrid junction that can be used as a divider, a combiner and a signal adder/subtractor.
Its behavior is very simple: when an EM field is fed into port 2, the signal is divided in phase between ports 1 and 3. Port 4 is isolated because its E-plane is perpendicular to the E-plane of port 2. But if the EM field is fed into port 4, it is divided between ports 1 and 3 in phase opposition (180°), while port 2 is now isolated.
Analyzing the Magic Tee with FEM simulation and feeding power through port 2 gives the following response
where the power is split between ports 1 and 3 while port 4 is isolated. Doing the same from port 4 gives
where now port 2 is isolated.
To see the phases, a vector plot of the E-field is used
where it can be seen that the field at ports 1 and 3 has the same direction, so they are in phase. Feeding from port 4
shows that the signals at ports 1 and 3 have the same level, but in phase opposition (180° between them).
FEM simulation allows us to analyze the behavior of the EM field from different points of view just by changing the excitations. For example, feeding a signal in phase into ports 2 and 4, both signals add in phase at port 3 and cancel at port 1,
whereas inverting the phase at port 2 or port 4 makes the signals add at port 1 and cancel at port 3,
and the result is a signal adder/subtractor.
The aim of this post was to analyze the electrical behavior of waveguides using a 3D FEM simulator. The advantage of these simulators is that they allow the EM fields in three-dimensional structures to be analyzed with good precision. Modeling is the most important part of correctly defining the structure under study, since a 3D simulator must mesh the structure, and this meshing, which needs a high number of tetrahedra to achieve good convergence, also tends to need more machine memory and processing capacity.
The structures analyzed here, due to their simplicity, have not required long simulation times or significant processing capacity, but as the models become more complex, the processing capacity needed to achieve good accuracy increases.
In subsequent posts, other methods to reduce the modeling effort in complex structures will be analyzed, such as the use of symmetry planes that allow us to divide the structure and reduce the meshing considerably.
- Daniel G. Swanson, Jr., Wolfgang J. R. Hoefer; “Microwave Circuit Modeling Using Electromagnetic Field Simulation”; Artech House, 2003, ISBN 1-58053-308-6
- Paul Wade, “Rectangular Waveguide to Coax Transition Design”, QEX, Nov/Dec 2006
The Smith Chart is a standard tool in RF design. Developed by Phillip Smith in 1939, it has become the most popular graphical method for representing impedances and performing operations with complex numbers. Traditionally, the Smith Chart has been used in 2D polar form, as a circle of unit radius. However, the 2D format has some restrictions when active impedances (oscillators) or stability circles (amplifiers) are represented, since these usually fall outside the polar chart. In recent years the three-dimensional Smith Chart has become popular, as advances in 3D rendering software make it easy to use for design. In this post, I will try to show the handling of the three-dimensional Smith Chart and its application to a low-noise amplifier design.
When Phillip Smith was working at Bell Labs, he had to match an antenna and looked for a way to solve the design graphically. By means of the mathematical expressions that define the impedances on transmission lines, he managed to represent the complex impedance plane by circles of constant resistance and reactance. These circles made it easier to represent any impedance in a polar space, with perfect matching at the center of the chart and the outer circle representing pure reactance. Traditionally, the Smith Chart has been represented in polar form as shown below
The impedance is normalized by taking the ratio between the impedance and the generator impedance. The center of the chart is pure unit resistance (perfect matching), while the peripheral circle that bounds the chart is pure reactance. The left end of the chart represents a pure short circuit and the right end a pure open circuit. The chart became very popular for performing matching-network calculations with transmission lines using a graphical method. However, design difficulties arose when active impedances were analyzed, when studying amplifier stability and designing oscillators.
By design, the chart is limited to impedances with positive real part, but it can be extended to impedances with negative real part by expanding the complex plane through the Möbius transformation. This chart, expanded to the negative-real-part plane, can be seen in the following figure
However, this chart shows two issues: 1) although it allows all impedances to be represented, there is a problem with complex infinity, so it remains limited, and 2) the chart has large dimensions that make it difficult to use in a graphical environment, even a computer-aided one. The extension is nevertheless needed when amplifier stability circles are analyzed, since in most cases the centers of these circles are located outside the passive-impedance chart.
In a graphical computer environment, the circles are drawn by the software itself through calculation, so the chart can be limited to the passive region by drawing only part of each stability circle. But oscillators still pose the problem of complex infinity, which can be solved through a representation on a Riemann sphere.
The Riemann sphere is a mathematical construction for representing the complete complex plane, including infinity. The entire complex plane is mapped onto a spherical surface by a stereographic projection.
In this representation the southern hemisphere contains the origin, the northern hemisphere represents infinity, and the equator is the circle of unit radius. The distribution of complex values on the sphere can be seen in the following figure
Thus, it is possible to represent any complex number on a surface that is easy to handle.
THE SMITH CHART ON A RIEMANN SPHERE
Since the Smith Chart is a representation of the complex plane, it can be projected in the same way onto a Riemann sphere, as shown in the following figure
In this case, the northern hemisphere contains the impedances with positive resistance (passive impedances); the southern hemisphere, the impedances with negative resistance (active impedances); the eastern hemisphere, the inductive impedances; and the western one, the capacitive impedances. The prime meridian contains the purely resistive impedances.
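A minimal sketch of this projection, using the convention described above (passive impedances in the northern hemisphere, active ones in the southern):

```python
def gamma(z):
    """Möbius transform: normalized impedance -> reflection coefficient."""
    return (z - 1) / (z + 1)

def to_sphere(g):
    """Stereographic projection of the reflection coefficient g onto a
    unit sphere. Convention as in the post: |g| < 1 (passive) maps to the
    northern hemisphere, |g| > 1 (active) to the southern, |g| = 1 to the
    equator."""
    d = 1 + abs(g) ** 2
    return (2 * g.real / d, 2 * g.imag / d, (1 - abs(g) ** 2) / d)

print(to_sphere(gamma(0.5 + 1j)))   # passive impedance: z-coordinate > 0
print(to_sphere(gamma(-0.5 + 1j)))  # active impedance: z-coordinate < 0
```

Complex infinity (an ideal open circuit) maps cleanly to the north pole, which is exactly what the flat extended chart cannot do.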
Thus, any impedance, active or passive, can be represented at some point of the sphere, greatly facilitating its drawing. In the same way, we can represent the stability circles of any amplifier without having to expand the chart. For example, suppose we want to represent the stability circles for a transistor whose S-parameters at 3GHz are
S11=0.82∠−69.5°  S21=5.66∠113.8°  S12=0.03∠48.8°  S22=0.72∠−37.6°
Its representation on the conventional Smith Chart is
while on the three-dimensional chart it is
where both circles can be seen, partly in the northern hemisphere and partly in the southern one. Their representation has thus been greatly simplified.
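The stability circles drawn above can be computed directly from the S-parameters. A short sketch, using the standard load-plane stability-circle formulas and the Rollett factor:

```python
import cmath
import math

def p(mag, deg):
    """Build a complex number from magnitude and angle in degrees."""
    return cmath.rect(mag, math.radians(deg))

# Transistor S-parameters at 3 GHz, as listed above
S11, S21 = p(0.82, -69.5), p(5.66, 113.8)
S12, S22 = p(0.03, 48.8), p(0.72, -37.6)

delta = S11 * S22 - S12 * S21

# Rollett factor: K > 1 (with |delta| < 1) means unconditional stability
K = (1 - abs(S11) ** 2 - abs(S22) ** 2 + abs(delta) ** 2) / (2 * abs(S12 * S21))

# Output (load-plane) stability circle: center and radius
D2 = abs(S22) ** 2 - abs(delta) ** 2
center = (S22 - delta * S11.conjugate()).conjugate() / D2
radius = abs(S12 * S21) / abs(D2)

print(f"K = {K:.2f} (< 1: conditionally stable)")
print(f"output stability circle: |center| = {abs(center):.2f}, radius = {radius:.2f}")
```

The center magnitude comes out well above 1, i.e. outside the passive chart, which is precisely why the flat chart needs extending while the sphere does not.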
A PRACTICAL APPLICATION: LOW NOISE AMPLIFIER
Let's see a practical application of the 3D chart by matching the previous amplifier for maximum stable gain and minimum noise figure at 3GHz, using traditional methods. The transistor parameters are
S11=0.82∠−69.5°  S21=5.66∠113.8°  S12=0.03∠48.8°  S22=0.72∠−37.6°
NFmin=0.62dB  Γopt=0.5∠67.5°  Rn=0.2
The S-parameters are represented on the 3D Smith chart and the stability circles are drawn. For a better representation, 3 frequencies are used, over a 500MHz bandwidth.
The S-parameters as well as the stability circles can be seen on both the conventional Smith Chart and the 3D one. On the conventional chart, the stability circles leave the chart.
An amplifier is unconditionally stable when the stability circles lie entirely in the active-impedance region of the chart, the southern hemisphere, under two conditions: if a circle lies in the active region and does not surround the passive one, the unstable impedances are located inside the circle; if it surrounds the passive region, the unstable impedances are located outside the circle.
In this case, since part of each circle enters the passive-impedance region, the amplifier is conditionally stable, and the impedances that could destabilize the amplifier lie inside the circles. This is something that cannot yet be seen clearly on the three-dimensional chart; the app does not seem to calculate it, and it would be interesting to include it in later versions, because it would greatly facilitate the design.
Let's now match the input for minimum noise. For this, a matching network must be designed to transform 50Ω into the reflection coefficient Γopt, whose normalized impedance is Zopt=0.86+j1.07. In the app, we open the design window and enter this impedance.
Working now with admittances, we move along the circle of constant conductance until the real part of the impedance is 1. This is done by estimation, and a susceptance of 0.5 is reached. The susceptance must therefore be increased by 0.5 − (−0.57) = 1.07, which corresponds to a shunt capacitor of 1.14pF.
Now we only need a component that cancels the reactance while keeping the resistance constant. As the reactance is −1.09, the added value should be +1.09, so that the total reactance is zero. This is equivalent to a series inductor of 2.9nH.
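The two steps above can be reproduced numerically, which is a useful cross-check of the graphical estimation:

```python
import math

Z0 = 50.0                      # reference impedance (ohm)
w = 2 * math.pi * 3e9          # working frequency: 3 GHz

z_opt = 0.86 + 1.07j           # normalized impedance for Gamma_opt
y_opt = 1 / z_opt              # about 0.46 - j0.57

# Shunt capacitor: move along the constant-conductance circle g = Re(y_opt)
# until the real part of the impedance equals 1
g = y_opt.real
b_target = math.sqrt(g - g * g)          # susceptance giving Re(z) = 1
C = (b_target - y_opt.imag) / (Z0 * w)   # susceptance jump of about +1.07

# Series inductor: cancel the remaining reactance (about -1.09)
z_mid = 1 / (g + 1j * b_target)
L = -z_mid.imag * Z0 / w

print(f"shunt C  = {C * 1e12:.2f} pF")   # about 1.1 pF
print(f"series L = {L * 1e9:.2f} nH")    # about 2.9 nH
```

The `b_target` expression follows from Re(1/(g+jb)) = g/(g²+b²) = 1, i.e. b² = g − g².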
Once the input matching network for the lowest noise figure is calculated, we recalculate the S-parameters. Being an active device, the matching network transforms them into:
S11=0.54∠−177°  S21=8.3∠61.1°  S12=0.04∠−3.9°  S22=0.72∠−48.6°
which are represented on the Smith Chart to obtain the stability circles.
The unstable regions are the internal regions, so the amplifier remains stable.
Now the output matching network is designed for maximum stable gain: the output, with reflection coefficient S22=0.72∠−48.6°, should be loaded with ΓL (the conjugate of S22), transforming 50Ω into ΓL=0.72∠48.6°. This operation is performed in the same way as for the input matching network. With the complete matching in place, the S-parameters are recalculated with both input and output networks. These are
S11=0.83∠145°  S21=12∠−7.5°  S12=0.06∠−72.5°  S22=0.005∠162°
The gain is 20·log(S21)=21.6dB, and the noise figure 0.62dB (NFmin). These parameters are now represented on the three-dimensional chart to obtain the stability circles.
In this case, the stable region of the input stability circle is its inside, and that of the output stability circle is its outside. Since both reflection coefficients, S11 and S22, lie within the stable regions, the amplifier is stable.
In this entry I had my first contact with the three-dimensional Smith Chart. The aim was to study its potential with respect to the traditional chart in microwave engineering. Its advantage is that, by taking the Möbius transform onto a Riemann sphere, it can represent infinite values, providing a three-dimensional graphical tool where practically all passive and active impedances can be drawn, including parameters that are difficult to draw on the traditional chart, such as stability circles.
In its version 1, the app, which can be found on the website 3D Smith Chart / A New Vision in Microwave Analysis and Design, offers some design options and configurations, although some applications should undoubtedly be added in future versions. One of the most advantageous applications of the chart, having studied the stability circles of an amplifier, would be locating the stability regions graphically. Although this can be solved by calculation, a visual image is always more helpful.
The app has a user manual with examples explained in a simple way, so that the designer becomes familiar with it immediately. In my professional opinion, it is an ideal tool for those of us who are used to using the Smith Chart to perform our matching-network calculations.
- Müller, Andrei; Dascalu, Dan C.; Soto, Pablo; Boria, Vicente E.; “The 3D Smith Chart and Its Practical Applications”; Microwave Journal, vol. 5, no. 7, pp. 64–74, Jul. 2012
- Zelley, Chris; “A spherical representation of the Smith Chart”; IEEE Microwave Magazine, vol. 8, pp. 60–66, July 2007
- Grebennikov, Andrei; Kumar, Narendra; Yarman, Binboga S.; “Broadband RF and Microwave Amplifiers”; Boca Raton: CRC Press, 2016; ISBN 978-1-1388-0020-5
In the previous post, some simple examples of the Monte Carlo method were shown. In this post the method will be analyzed more deeply, performing a statistical analysis of a more complex system, analyzing its output variables and studying the results so that they become genuinely useful. The advantage of simulation is that variables can be generated randomly, and correlation between variables can also be set, achieving different effects in the analysis of the system performance. Thus, any system can be analyzed statistically not only with purely random variables, but also linking that random generation to batch analysis, production failures and post-production recovery.
The circuits studied in the previous post were very simple, which made it possible to see the assignment of random variables and their results; here these random variables are integrated into a more complex system. With this analysis, it is possible to check the performance and propose corrections that statistically limit the variations in the final system.
In this case, the dispersive effect of tolerances will be studied on one of the circuits where it is hardest to achieve stable performance: an electronic filter. A bandpass filter will be designed and tuned to a fixed frequency, with given passband and stopband bandwidths, and several statistical analyses will be run on it to check its response against the device tolerances.
DESIGN OF THE BANDPASS FILTER
A bandpass filter is designed with a 37.5MHz center frequency, a 7MHz passband (return losses ≥14dB) and a 19MHz stopband bandwidth (stopband attenuation >20dB). The filter calculation yields three sections, and its schematic is
With the calculated component values, standard values that realize the filter transfer function are chosen, and its frequency response is
where it can be checked that the center frequency is 37.5MHz, the return losses are better than 14dB within ±3.5MHz of the center frequency, and the stopband width is 18.8MHz, extending 8.5MHz to the left of the center frequency and 10.3MHz to the right.
Once the filter is designed, a first statistical analysis is done, considering that the capacitor tolerance is ±5% and that the inductors are adjustable. In addition, there is no correlation between the random variables, so each can take a random value independently.
STATISTICAL ANALYSIS OF THE FILTER WITHOUT CORRELATION BETWEEN VARIABLES
As seen in the previous post, random variables produce a dispersion of the output, so limits must be defined for considering a filter valid, and the frequency response analyzed against those limits. For this, yield analysis is used: an analysis based on the Monte Carlo algorithm that checks the performance, or effectiveness, of the design. To perform it, the validation limits must be defined. The chosen specifications are return losses >13.5dB over 35–40MHz (a 2MHz reduction in passband width) and attenuation >20dB at frequencies ≤29MHz and ≥48MHz. The statistical analysis gives
a poor result: only 60% of the possible filters generated with ±5% tolerances could be considered valid. The rest would be rejected by quality control, meaning that 40% defective material would have to be returned to production for reprocessing.
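The idea behind the yield analysis can be illustrated with a much smaller sketch: one series resonator of the filter, with only the capacitor as a random variable and a toy acceptance criterion. The 15pF value appears in this post; the inductor value is a hypothetical one chosen to resonate at 37.5MHz.

```python
import math
import random

random.seed(1)

# One series resonator of the filter, nominally tuned to 37.5 MHz.
L_NOM, C_NOM = 1.2e-6, 15e-12
f_nom = 1 / (2 * math.pi * math.sqrt(L_NOM * C_NOM))  # about 37.5 MHz

def sample_f0():
    """Resonance of one unit with a +/-5% (uniform) capacitor tolerance."""
    c = C_NOM * random.uniform(0.95, 1.05)
    return 1 / (2 * math.pi * math.sqrt(L_NOM * c))

# Accept a unit if its resonance stays within 1% of nominal (toy criterion)
N = 2000
passed = sum(abs(sample_f0() / f_nom - 1) < 0.01 for _ in range(N))
print(f"yield = {100 * passed / N:.0f}%")  # roughly 40% with these numbers
```

A real yield analysis evaluates the full S-parameter response against the passband and stopband limits, but the mechanism, count the fraction of random samples that meet the specifications, is the same.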
The graph shows that the return losses are primarily responsible for this poor performance. What can be done to improve it? In this case there are 4 random variables. However, two capacitors have the same value (15pF), and when they are assembled in a production process they usually belong to the same manufacturing batch. Without correlation, these variables can take completely different values; in the uncorrelated case, the following chart is obtained
However, when the assembled components belong to the same manufacturing batch, their tolerances always vary in the same direction, so there is correlation between these variables.
STATISTICAL ANALYSIS OF THE FILTER WITH CORRELATION BETWEEN VARIABLES
When correlation is used, the influence of the tolerances decreases. It is no longer a totally random process, but one of manufacturing batches within which the variations happen. In this case, a correlation can be set between the variables C1 and C3, which have the same nominal value and belong to the same manufacturing batch, so the correlation graph is now
where the variation trend within each batch is the same. Setting a correlation between the two variables then allows the effective performance of the filter to be studied, giving
which seems even worse. But what is really happening? It must be taken into account that the variable correlation has allowed complete batches to be analyzed, whereas in the previous analysis the batches could not be distinguished. The result is 26 successful complete manufacturing processes: out of 50 complete manufacturing processes, 26 would be successful.
However, 24 complete processes would have to be returned to production with their whole lots, which is still a genuinely bad result. But there is a solution: the post-production adjustment.
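The batch correlation described above can be sketched as a shared batch deviation plus a small residual spread per component (the 1% within-batch spread here is a hypothetical figure, not from the post):

```python
import random

random.seed(2)

TOL = 0.05        # +/-5% tolerance
C_NOM = 15e-12    # nominal value of the two equal capacitors, C1 and C3

def uncorrelated_pair():
    """C1 and C3 drawn independently (no correlation)."""
    return (C_NOM * random.uniform(1 - TOL, 1 + TOL),
            C_NOM * random.uniform(1 - TOL, 1 + TOL))

def batch_pair(residual=0.01):
    """C1 and C3 from the same manufacturing batch: both share the batch
    deviation and differ only by a small residual spread (hypothetical 1%)."""
    batch = random.uniform(1 - TOL, 1 + TOL)
    return (C_NOM * batch * random.uniform(1 - residual, 1 + residual),
            C_NOM * batch * random.uniform(1 - residual, 1 + residual))

c1, c3 = batch_pair()
print(f"one batch sample: C1 = {c1 * 1e12:.3f} pF, C3 = {c3 * 1e12:.3f} pF")
```

With `batch_pair` the two capacitors always drift together, which is exactly the strong C1–C3 correlation the simulator models.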
STATISTICAL ANALYSIS WITH POST-PRODUCTION ADJUSTMENT
As said, at this point the response looks very bad, but remember that the inductors were set adjustable. What happens now? We run a new analysis, allowing these variables to take values within ±10% of the nominal value, enable post-production optimization in the Monte Carlo analysis, and voilà! Even with a very high defect rate, it is possible to recover 96% of the filters to within the valid limits.
This is an improvement, because the analysis shows that almost all of the batches can be recovered with post-production adjustment; the analysis therefore reveals not only the defect rate but also the recovery possibilities.
It is possible to represent the variations of the inductors (in this case those of the series resonances) to analyze the circuit's sensitivity to the most critical changes. This analysis allows an adjustment pattern to be set that reduces the adjustment time the filter needs.
So, with this analysis performed at design time, it is possible to take decisions that set the manufacturing patterns of the products and the post-production adjustment patterns, knowing the statistical response of the designed filter in advance. This analysis is a very important resource before validating any design.
This post has shown a further step in the possibilities of Monte Carlo statistical analysis. The algorithm provides useful results and allows conditions to be set for various analyses, optimizing the design further. With post-production adjustment, it is possible to estimate the recovery rate of the proposed design. In the next post, another example of the Monte Carlo method will show more possibilities of the algorithm.
- Castillo Ron, Enrique, “Introducción a la Estadística Aplicada”, Santander, NORAY, 1978, ISBN 84-300-0021-6.
- Peña Sánchez de Rivera, Daniel, “Fundamentos de Estadística”, Madrid, Alianza Editorial, 2001, ISBN 84-206-8696-4.
- Kroese, Dirk P., et al., “Why the Monte Carlo method is so important today”, 2014, WIREs Comp Stat, vol. 6, pp. 386-392, DOI: 10.1002/wics.1314.
When any electronic device is designed, several deterministic methods can be used to calculate its main parameters, giving the values we measure physically in any device or system. These preliminary calculations support the development, and their results usually agree with the prediction. However, everything we manufacture is always subject to tolerances, and these tolerances cause variations in the results that often cannot be analyzed easily without a powerful calculation application. In 1944, von Neumann and Ulam developed a non-deterministic, statistical method called the Monte Carlo method. In the following blog posts, we are going to analyze the use of this powerful method for predicting possible tolerances in circuits, especially when they are manufactured industrially.
In any process, the output is a function of the input variables, which generate a response that can be determined whether the system is linear or not. The relationship between the response and the input variables is called the transfer function, and knowing it allows us to obtain the result for any input excitation.
However, it must be taken into account that the input variables are random variables, with their own distribution functions, and are subject to stochastic processes, although their behavior is predictable through probability theory. For example, when we make any measurement, we get its average value and the error with which that magnitude can be measured. This allows us to bound the region in which the value is correct and decide when the magnitude behaves incorrectly.
Over many years, I have learned to successfully turn simulation results into real physical results, with predictable behavior and valid conclusions, and I have noticed that in most cases simulation is used only to get the desired result, without studying how that result depends on the variables. However, most simulators include very useful statistical algorithms that, properly used, provide data the designer can use to predict the behavior of any system, or at least to analyze what can happen.
Yet these methods are seldom used, whether from lack of knowledge of statistical patterns or from ignorance of how they can be applied. Therefore, in these posts we shall analyze the Monte Carlo method in circuit simulations and discover an important tool that is unknown to many simulator users.
DEVICES AS RANDOM VARIABLES
Electronic circuits are made of simple electronic devices which have statistical behavior due to their manufacturing. Device manufacturers usually publish nominal values and tolerances; thus, a resistor manufacturer publishes not only rated values and dimensions, but also tolerances, stress, temperature dependence, etc. These parameters provide important information and, properly analyzed with a powerful calculation tool (such as a simulator), allow us to predict the behavior of any complex circuit.
In this post, we are going to analyze only the error environment around the nominal value of a resistor. For any resistor, the manufacturer defines its nominal value and its tolerance; we assume 1kΩ for the nominal value and ±5% for the tolerance. This means the resistance value can fall between 950Ω and 1.05kΩ. In the case of a bipolar transistor, the current gain β can take a value between 100 and 600 (e.g. NXP BC817), which may cause an important and uncontrollable variation of the collector current. Knowing these data, we can analyze the statistical behavior of an electronic circuit through the Monte Carlo method.
First, let us look at the resistor: we have said that it has a ±5% tolerance. We will analyze its behavior with the Monte Carlo method, using a circuit simulator. A priori we do not know the probability function, although the most common is a Gaussian, whose well-known expression is
where μ is the mean and σ² the variance. Running a Monte Carlo analysis in the simulator with 2000 samples, we get a histogram of the resistance values, as shown in the next figure
The Monte Carlo algorithm introduces a variable whose values follow a Gaussian distribution, but the individual values it takes are random. If these 2000 samples were taken in five different processes with 400 samples each, we would still find a Gaussian tendency, but the distributions would differ
Therefore, working properly with the random variables, we can get a complete study of the feasibility of any design and of the sensitivity each variable shows. In the next example, we are going to analyze the bias point of a bipolar transistor whose β varies between 100 and 600, with an average value of 350 (β is considered Gaussian), fed with resistors with a ±5% nominal tolerance, studying the collector current variation over 100 samples.
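The resistor histogram described above is easy to reproduce outside a circuit simulator. A minimal sketch, assuming (as is common practice, though not stated by any manufacturer here) that the ±5% limits correspond to 3σ:

```python
import random
import statistics
from collections import Counter

random.seed(3)

R_NOM, TOL = 1000.0, 0.05
SIGMA = R_NOM * TOL / 3   # common assumption: the +/-5% limits sit at 3 sigma

samples = [random.gauss(R_NOM, SIGMA) for _ in range(2000)]
print(f"mean = {statistics.mean(samples):.1f} ohm, "
      f"st. dev. = {statistics.pstdev(samples):.1f} ohm")

# Crude text histogram in 10-ohm bins
bins = Counter(int(r // 10) * 10 for r in samples)
for edge in sorted(bins):
    print(f"{edge:4d}-{edge + 10:4d} ohm | {'#' * (bins[edge] // 10)}")
```

Re-running with different seeds mimics the five 400-sample processes: each run is recognizably Gaussian, but no two histograms are identical.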
STATISTICAL ANALYSIS OF A BJT BEHAVIOR IN DC
Now, we study the behavior of a bias circuit with a bipolar transistor, like the one in the next figure
where the resistors have a ±5% tolerance and the transistor β varies between 100 and 600, with a nominal value of 350. Its bias point is Ic=1.8mA, Vce=3.2V. A Monte Carlo analysis with 100 samples gives the next result
From the shape of the graph, we can check that the result converges to a Gaussian distribution, with an average value Ic=1.8mA and a tolerance of ±28%. Suppose now that we run the same sweep in several batches of 100 samples each. The result obtained is
where each batch produces a graph that converges to a Gaussian distribution with average value μ=1.8mA and variance σ²=7%. Thus, each process can be analyzed not only as a global statistical analysis but also batch by batch. Suppose now that β is a random variable with a uniform distribution between 100 and 600. Analyzing only 100 samples, the next graph is obtained
and it can be seen that the current converges to a uniform distribution, increasing the current tolerance range and the probability at the extremes. We can therefore also study the circuit behavior when its variables have different distribution functions.
Having seen that the Monte Carlo method enables us to analyze the behavior of any complex circuit in terms of tolerances, it will likewise help us to study how those results could be corrected. Therefore, in the next posts we shall analyze this method in depth, extending the study of its potential and of what can be achieved with it.
CORRECTING THE TOLERANCES
In the simulated circuit, when we characterized the transistor's β as a uniform random variable, the probability of unwanted current values (at the extremes) increased. This is one of the most problematic features, not only of bipolar transistors but also of field-effect transistors: the variation of their current ratios. This simple example lets us see what happens when we use a typical correction circuit for the β variation, such as the classic emitter-resistance bias.
Using this circuit and performing a Monte Carlo analysis, we can compare its results with those obtained in the previous case, but now using 1000 samples. The result is
where we can verify that the probability has concentrated around 2mA, reducing the probability density at low current values and narrowing the distribution function. Therefore, the Monte Carlo method not only enables us to analyze the behavior of a circuit subjected to statistical variations, but also allows us to optimize the circuit and adjust it to the desired limit values. Used properly, it is a powerful calculation tool that will improve our knowledge of our circuits.
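A minimal sketch of this comparison, with hypothetical component values (a 47kΩ/10kΩ divider and a 680Ω emitter resistor, chosen to bias the transistor near 2mA), shows how emitter degeneration narrows the current distribution against a fixed-bias circuit:

```python
import random
import statistics

def ic_fixed_bias(beta, rb, vcc=12.0, vbe=0.7):
    """Fixed bias: Ic = beta*(Vcc - Vbe)/Rb, directly proportional to beta."""
    return beta * (vcc - vbe) / rb

def ic_emitter_bias(beta, r1, r2, re, vcc=12.0, vbe=0.7):
    """Emitter-resistance bias: voltage divider plus emitter degeneration."""
    vth = vcc * r2 / (r1 + r2)          # Thevenin voltage of the divider
    rth = r1 * r2 / (r1 + r2)           # Thevenin resistance of the divider
    return (vth - vbe) / (re + rth / beta)

tol = lambda nom: nom * random.uniform(0.95, 1.05)   # +/-5% resistor tolerance
beta = lambda: random.uniform(100, 600)              # uniform beta, as above

fixed, degenerated = [], []
for _ in range(1000):
    fixed.append(ic_fixed_bias(beta(), tol(2.2e6)))
    degenerated.append(ic_emitter_bias(beta(), tol(47e3), tol(10e3), tol(680.0)))

for name, s in (("fixed bias", fixed), ("emitter bias", degenerated)):
    print("%-12s mean = %.2f mA, stdev = %.2f mA"
          % (name, statistics.mean(s) * 1e3, statistics.stdev(s) * 1e3))
```

The emitter resistor makes Ic depend only weakly on β (through the Rth/β term), so the spread collapses even though β still sweeps its full 100-600 range.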
With this first post, we wish to begin a series dedicated to the Monte Carlo method, showing the method and its usefulness. As we have seen in the examples, the Monte Carlo method provides very useful data about the limitations and variations of the circuit we are analyzing, provided we know how they are characterized. It also allows us to improve the circuit using statistical studies, in addition to setting the standards for verification in any production process.
In the next posts we shall go deeper into the method, performing a more comprehensive study of a specific circuit from one of my most recent projects, analyzing the expected results and the different simulations that can be performed with the Monte Carlo method, such as worst-case analysis, sensitivity analysis, and post-production optimization.
- Castillo Ron, Enrique, “Introducción a la Estadística Aplicada”, Santander, NORAY, 1978, ISBN 84-300-0021-6.
- Peña Sánchez de Rivera, Daniel, “Fundamentos de Estadística”, Madrid, Alianza Editorial, 2001, ISBN 84-206-8696-4.
- Kroese, Dirk P., et al., “Why the Monte Carlo method is so important today”, 2014, WIREs Comp Stat, Vol. 6, pp. 386-392, DOI: 10.1002/.
In this article, we are going to demonstrate a 900MHz feedforward amplifier design. Feedforward is a linearization technique for the IM distortion caused by the nonlinear behavior of the active device. The IM distortion generates lateral interference on both sides of the main frequency, affecting the Adjacent Channel. Decreasing this interference is the purpose of this entry.
We are going to start with an LDMOS amplifier, tuned to 900MHz. The active device is an STMicroelectronics MOSFET, the PD84001, which operates at 8V in common-source mode at frequencies of up to 1GHz. POUT is 31dBm (IDQ=50mA) and the drain efficiency is 60%. Once we have designed the amplifier, we will linearize the IM products using the feedforward technique.
THE MOSFET AMPLIFIER
The amplifier is designed in common-source mode. The operating point is chosen at the manufacturer's optimal ratings: VDS=8V, IDQ=50mA. At this operating point, and at 900MHz, ZIN=3,6+j·4,3Ω and ZOUT=3,9+j·5,5Ω. Maximum power transfer is obtained with conjugate matching networks at the generator and load impedances, which are Z0=50Ω. Once the matching networks are calculated, the proposed schematic for the amplifier is
The amplifier's gain is 34,3dB, and its phase is 76,4deg. The input and output return losses are 30,7dB and 39,8dB respectively. The amplifier is therefore matched, and the maximum 1-tone output power at 900MHz is 27dBm.
For a 2-tone input, the IM distortion generates a power drop caused by the third-order distortion. The TOI (third-order intercept) is 31,7dBm, close to the maximum output power in the datasheet, which causes the power drop.
Intermodulation products are 12dB below the carrier, and this value may cause interference on the Adjacent Channel. Therefore, we must reduce this value as much as possible, using a linearization technique.
There are many linearization techniques, but we are going to use feedforward, because it only requires RF networks.
The amplifier gain, including the second- and third-order distortions, can be expressed by
where the input signal Pi is a 2-tone signal. In this case, we will not take the second-order distortion into consideration, since the Pi frequencies will be very close together and a bandpass filter could remove the second-order spurious.
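The appearance of the third-order products near the carriers can be sketched numerically. Assuming a memoryless polynomial model y = g1·x + g3·x³ with hypothetical coefficients (not the article's measured values), a small DFT locates the IM3 product at 2f1-f2:

```python
import math

def tone_amplitude(signal, n, k):
    """Amplitude of the k-th DFT bin of a real signal of length n."""
    re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
    im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
    return 2 * math.hypot(re, im) / n

# Hypothetical memoryless model: y = g1*x + g3*x^3 (second order ignored)
g1, g3 = 10.0, -0.5
A = 0.5             # per-tone input amplitude
k1, k2 = 50, 55     # two closely spaced tone bins
N = 1024

x = [A * (math.cos(2 * math.pi * k1 * i / N) +
          math.cos(2 * math.pi * k2 * i / N)) for i in range(N)]
y = [g1 * v + g3 * v ** 3 for v in x]

carrier = tone_amplitude(y, N, k1)
im3 = tone_amplitude(y, N, 2 * k1 - k2)   # third-order product at 2f1 - f2
print("IM3 level: %.1f dBc" % (20 * math.log10(im3 / carrier)))
```

The IM3 amplitude matches the classic two-tone result (3/4)·|g3|·A³, and the carrier is g1·A plus the (9/4)·g3·A³ compression term, which is the origin of the power drop mentioned above.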
The amplifier gain is complex. The coefficients g1 and g3 can be expressed in polar notation (magnitude/phase). These are
These coefficients will be used to calculate the phase shifter of the first stage. Now, we shall briefly describe the feedforward technique.
The feedforward principle is based on cancelling the distortion by combining it in phase opposition with a copy of itself. In an RF amplifier, a distorted output signal is generated by the active device's nonlinearity. It can be combined with the input signal in phase opposition, adjusting the levels of both signals. Thus, we obtain the distorted signal on one port, and only the distortion spurious on the other.
Cancellation of the main signals on the second port is achieved by placing a delay line (τ1) in the secondary network of the first stage. A sample of the output signal of the amplifier (G1) is derived and combined with the secondary network using a combiner, after equalizing the levels of both signals with an inter-stage attenuator (β). The output signal of the amplifier is called MAIN, and the combined signal AUX.
AUX is now used as an error signal in the second stage, where it is amplified by an error amplifier (G2), while MAIN is delayed by another delay line (τ2). In this second stage, we want the same effect as in the first stage: put both signals in phase opposition and combine them. The distortion is then cancelled, reducing the interference on the Adjacent Channel.
The level at the output of the amplifier can be written as
and PAUX1 (the sample level before the error combiner) can be expressed as
The level PAUX2 at the secondary network is
In these expressions, β is the magnitude of the losses of the inter-stage attenuator and θβ is its phase, while θA2 is the phase of the delay line τ1. The following must be satisfied
θ1 is the phase of the linear gain of the amplifier. Thus, not only must the phases be in opposition, but the delay time must also be the same in every subnetwork. In magnitude, |β·g1|=1 must be satisfied.
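The first-stage cancellation condition can be sketched with complex phasors. This uses the amplifier's linear gain from above (34,3dB, 76,4deg) and ignores coupler losses and delay; the attenuator β is chosen so that |β·g1|=1 with the phases in opposition:

```python
import cmath
import math

# Amplifier linear gain: magnitude from 34.3 dB, phase 76.4 deg (from the article)
g1 = cmath.rect(10 ** (34.3 / 20), math.radians(76.4))
pi_in = 1.0 + 0j    # normalized input phasor

# Inter-stage attenuator: |beta * g1| = 1, phase set for exact opposition
beta = cmath.rect(1 / abs(g1), math.pi - cmath.phase(g1))

main_sample = beta * g1 * pi_in   # attenuated sample of the amplifier output
delayed = pi_in                   # delay-line path (tau1), phase reference 0
aux = main_sample + delayed       # combiner output: the carriers cancel

print("residual carrier level: %.3e" % abs(aux))
```

With |β·g1|=1 and the phases in opposition, the combined carrier vanishes and only the distortion products (not modeled here) would survive on the AUX port.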
In the second stage, the gain of the error amplifier must equalize the g2 and g3 distortion levels, and their phases must satisfy the same equations (absolute phase and delay time) as in the first stage, in order to combine and cancel the distortions.
In RF designs, the adders must be replaced by hybrid couplers or directional couplers, which have insertion and coupling losses. Using two hybrid couplers (3dB of insertion losses) to split Pi and combine PAUX1 and PAUX2, and two directional couplers (with C as the coupling loss) to take the sample in the first stage and combine the error sample in the second stage, the expressions are now
at the first stage and
at the second stage.
In a narrowband amplifier, the delay time can be neglected, because its phase slope is much smaller than that of a broadband amplifier.
900MHz FEEDFORWARD AMPLIFIER
Now, we are going to design our feedforward amplifier, based on our two-stage LDMOS amplifier. First, we must split the input signal into two outputs, one to the amplifier and the other to the phase shifter. We are going to use a 180-deg hybrid coupler with 3dB of insertion losses. At these frequencies, hybrid couplers can easily be found on the market as surface-mount devices (SMD). The designed amplifier is a narrowband amplifier.
The output levels of the hybrid coupler are equal in magnitude and phase. The phase of the amplifier gain was 76,4deg in linear mode, but in nonlinear mode we obtained a phase of 69,4deg with 0dBm of input power. Taking a sample of the amplifier output with a directional coupler, which introduces a 90deg coupling phase and has a 10dB coupling level, we obtain a sample level of 12dBm with a phase of 159,4deg.
Then, we are going to combine it with another hybrid coupler, and since the level in the secondary network is -6dBm, we have to equalize both levels with the attenuator, whose attenuation must be ≈20dB. The phase shifter should be adjusted to a phase of ≈-12deg.
Adjusting the phase and the level with the phase shifter and the attenuator, we are able to optimize the response for several input levels.
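How finely the attenuator and phase shifter must be adjusted can be estimated from the classic matching-error formula for two equal carriers combined in phase opposition. This is a generic sketch of the cancellation sensitivity, not this circuit's exact values:

```python
import math

def residual_db(amp_err_db, phase_err_deg):
    """Residual level, relative to one carrier, after combining two carriers
    in phase opposition with the given amplitude and phase matching errors."""
    r = 10 ** (amp_err_db / 20)                # amplitude ratio of the two paths
    phi = math.radians(180 + phase_err_deg)    # intended opposition plus error
    res = math.sqrt(1 + r * r + 2 * r * math.cos(phi))   # |1 + r*e^{j*phi}|
    return 20 * math.log10(res)

for amp, ph in ((0.0, 1.0), (0.5, 1.0), (0.5, 5.0), (1.0, 10.0)):
    print("amp err %.1f dB, phase err %4.1f deg -> residual %6.1f dBc"
          % (amp, ph, residual_db(amp, ph)))
```

With perfect amplitude matching, a 1-deg phase error already limits the cancellation to about -35dBc, which is why the attenuator and phase shifter of both loops must be adjusted carefully.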
We are now going to complete the second stage, where an error amplifier raises the level of the AUX intermodulation spurious to combine in phase opposition with the MAIN line. The error amplifier G2 does not need to be a power amplifier at this stage; a linear, general-purpose amplifier may be used. Its gain is calculated from the difference between the MAIN and AUX IM spurious. This value is 45,5dB, because we are combining with a directional coupler, to reduce the insertion losses in the MAIN line. Using an amplifier with a magnitude of 45,5dB and a phase of -145deg, together with a phase shifter of the same value, the IM distortion after the coupler decreases by around 65dB. The output level is now 31dBm, and the TOI increases to 75dBm.
The definitive amplifier is
With this amplifier we have achieved a significant improvement: reducing the distortion on the Adjacent Channel by around 40dB for the same output level. Furthermore, the amplifier is very simple to realize with a few RF devices, and the design is easy and intuitive.
However, feedforward has two serious disadvantages: it needs a lot of PCB area; and the input level cannot be increased above the level that produces the maximum output power of the MOSFET, because the distortion would then rise above the value we have corrected.
In broadband designs we must take into consideration not only the phase of the amplifiers but also the group delay, because the phase slope of the amplifiers has to be compensated by the phase shifter. The phase shifter may then require a larger area, because it must act as a delay line too.
- R. Cordell, “A MOSFET Power Amplifier with Error Correction”; JAES, vol. 32, nr. 1/2, 1984 Jan/Feb
- J. Vanderkooy, S.P. Lipshitz, “Feed-Forward Error Correction in Power Amplifiers”, JAES, Vol. 28, Nr. 1/2, 1980 Feb
- A.M. Sandman, “Reducing Distortion by ‘Error add-on‘”, Wireless World, vol.79, p.32, 1974 Oct
From today, some entries of interest to readers will be published in English. This first entry explains the physical reasons why tin whiskers are generated on copper or zinc surfaces, and methods to prevent this phenomenon.
Whiskers are small tin wires which grow due to differences in surface tension at the bonding surface of the metals when an electrochemical plating is applied. In 2006, the author and his R&D team had to research this phenomenon: we found that the functionality of one product degraded over time, particularly when it was stored for more than three months. We decided to study the phenomenon, to understand why it occurs and to find possible solutions to prevent it in future developments.
The occurrence of uncontrolled phenomena is important for Research & Development in any factory. Most of the time, private companies apply Development more than Research in their products. However, development often runs into drawbacks and phenomena that are not in the company's know-how. These phenomena enable R&D teams to acquire new knowledge and apply it in the future.
In 2006, my R&D team found a phenomenon affecting the proper operation of one popular product. It was completely unknown to us, but experienced by others: whiskers. Since the affected product was the most important in our catalog, we were forced to do deeper research to find a solution, because the stored material might be defective. So my R&D team got to work on the phenomenon.
In biology, whiskers are the long hairs of mammals. In mechanical engineering, whiskers are tin filaments that grow on a material that has been processed by electroplating. Electroplating is used in industry because it provides fine finishes, easy weldability and protection against corrosion. In our case, the electroplating was tin over zamak (a zinc, magnesium, aluminum and copper alloy, widely used in industrial housings), to make the zamak weldable (by itself it is not) and to provide a well-finished product. Therefore, knowledge of the phenomenon and of the possible solutions was very important for us.
TIN WHISKERS ON ZAMAK ALLOY
This phenomenon appeared on zamak housings, because they must receive a tin plating to allow welding on them, since zamak does not allow conventional welding.
The drawback appeared when, after the material had been stored for a long time, this product, a narrow-band amplifier with an 8MHz cavity filter, showed strong deviations in its electrical features, forcing a new adjustment of the filter. In this product there were two separate adjustments: the first made during assembly, and the second 24 hours after the first. After completing both, the cavity filter usually remained stable, but a third adjustment was recommended if the product was stored for over 3 months (storage rotation).
However, during the product development, my R&D team discovered that the cavity filter was not stable: the degradation of the selectivity and the insertion losses grew over time, which meant that we could not ensure the filter's stability even after the third adjustment.
At first, the phenomenon looked like a failure in the electronic components caused by a defective lot of capacitors. Then it turned out to be a new phenomenon for us: we had accidentally grown whiskers on the tin surface.
As I said, whiskers are metal filament-like crystals which grow on the surface of the tin covering the zamak housing. The crystals are so fine that they break when a hand is passed over the surface, and they melt when a short-circuit current, which need not be very high, flows through them. In our case, they reduced the cavity volume of the filter and changed its resonant frequency, moving the insertion response to higher frequencies.
When we started to study this phenomenon, we discovered that it had been known since the 1940s and that even NASA had studied it deeply, so part of the way was already done: we verified that it was associated with the type of contact surface between the two materials and with the thickness of the tin plating. The surface tension of the materials and the storage temperature were involved too. In summary, whisker growth is governed by the equations of Dr. Irina Boguslavsky and her collaborator Peter Bush:
According to the experimental observations, both equations predict rather accurately the whisker growth observed in the tin layers. In these equations, σ represents the stress strength, related to the surface tension; LW is related to the thickness of the surface bonding; and n is a value dependent on the dislocation density and the temperature T. The terms k1, k2 and k3 are constants which depend on the material properties, and RW is the radius of the filament. The terms h1 and h2 refer to the filament growth where it has already happened in the bonding area (h1) and the time at which it happens (h2).
In these equations, when LW decreases, h2 increases, because it is an exponential function with n>>1. Therefore, the plating thickness is one of the variables which can be controlled. In our case, this thickness had been decreased from 20μm to 6-8μm because the development incorporated a threaded "F"-type connector instead of the former 9½mm DIN connector. Since the connectors were made during molding and subsequently threaded, they were made before the tin plating, and a 20μm tin plating did not allow the connectors to be threaded.
The σ term is related to the surface tension at the junction and depends only on the materials used. Studying different thicknesses with the plating manufacturer, we verified that the expressions were consistent: for smaller thicknesses the growth was always much higher than for larger ones, and there was always a tendency to grow, although it was lower in 20μm platings. Once the plating was made, the stress forces applied by the surface tension of the zamak "pushed" the tin atoms outwards, to maintain the equilibrium conditions, opposed by the surface tension of the tin. With less tin thickness, the forces applied at the contact surface were higher than the opposing forces on the tin surface, and the internal forces opposing the surface forces were weaker, allowing the whiskers to grow outwards.
POSSIBLE SOLUTIONS TO WHISKERS GROWTH
One solution, provided by Lucent Technologies, was to perform an intermediate nickel plating between the zinc surface and the tin plating.
Researchers from Lucent Technologies, after several experiments, found that the growth of whiskers was reduced significantly, to nearly zero.
In the graphs, we can see the growth of bright tin over a copper surface, which behaves similarly to zamak: it grows rapidly after 2 months, and the growth slope is very high. However, when an intermediate Ni layer is applied, the growth is practically zero. For satin tin, the growth starts after 4 months and shows a slightly lower slope; after applying the Ni layer, the growth is again practically zero.
The thickness of the Ni plating could be between 1μm and 2μm, while the thickness of the tin plating could be kept around 8μm. Thus, the defective threading is avoided while the whiskers are eliminated. However, the process was quite expensive, so this option was discarded.
Therefore, we were confronting another problem: eliminating the phenomenon implied increasing the thickness of the tin plating on the zamak alloy, which in turn caused the defective threading of the "F" connector. A molding modification, to provide more material on the connector, was quite expensive and involved a long modification time, since inserts had to be included. However, it was the right way to correct the whiskers.
Another problem arose with the stored material and the material being manufactured. The stored material could not be reprocessed, because it had already been assembled and could not be plated again. The intermediate solution was to remove the tin crystals by cleaning with compressed air.
For the material in the manufacturing process (unplated pieces), a temporary solution was to replace the tin plating with silver plating. Silver is weldable and can be applied in very thin layers while keeping its properties, but it has the disadvantage that its oxide shows a dirty, stained finish, affecting the product's esthetics.
Finally, an in-depth study of the phenomenon established that increasing the tin thickness should become the standard. The defective threading would be corrected with a tool, to re-cut the thread on the connector, and the mold could be modified, changing the inserts of the threaded connectors, to obtain a 10-20μm plating which does not fill the threads.
Tin whisker growth is a little-known phenomenon. It happens at the microscopic level and seems to have been studied only by agencies and national research laboratories, with strong budgets and the appropriate means for its observation.
In Spain, we have found few laboratories which study it. It occurs mainly in industry, caused by the processing of materials. This work was done by my R&D team, and it allowed us to acquire enough knowledge to correct and prevent the phenomenon, as well as to avoid its occurrence again.
Fortunately, there are many articles about it on the web, which allowed us to understand it, analyze its causes and find possible solutions.
- H. Livingston, “GEB-0002: Reducing the Risk of Tin Whisker-Induced Failures in Electronic Equipment”; GEIA Engineering Bulletin, GEIA-GEB-0002, 2003
- B. D. Dunn, “Whisker formation on electronic materials”, Circuit World, vol. 2, no. 4, pp.32 -40 1976
- R. Diehl, “Significant characteristics of Tin and Tin-lead contact electrodeposits for electronic connectors”, Metal Finish, pp.37-42 1993
- D. Pinsky and E. Lambert, “Tin whisker risk mitigation for high-reliability systems integrators and designers”, Proc. 5th Int. Conf. Lead Free Electronic Components and Assemblies, 2004
- Chen Xu, Yun Zhang, C. Fan and J. Abys, “Understanding Whisker Phenomenon: Driving Force for Whisker Formation”, Proceedings of IPC/SMEMA Council APEX, 2002
- I. Boguslavsky and P. Bush, “Recrystallization Principles Applied to Whisker Growth in Tin”, Proceedings of IPC/SMEMA Council APEX, 2003