# Simulation on Physical Systems

It has taken me a long time to write a post about simulation. The main reason is that, over many years, I have learned the value of using computers for the analysis of physical systems. Without these tools, I would never be able to obtain reliable results, given the sheer amount of calculation involved. Modern simulators, which exploit the computing capacity of modern machines to solve complex calculations, allow us to obtain a realistic picture of the behavior of a complex system once its structure is known. Physics and Engineering work every day with simulations to get better predictions and make decisions. In this post, I am going to show the most important points we should keep in mind about simulation.

In 1982, physicist Richard Feynman published an article in which he discussed the analysis of physical systems using computers (1). By those years, computer technology had progressed to such a level that much greater calculation capacity was available. New programming languages, such as FORTRAN, could handle complex formulas and allowed the analysis of systems described by integro-differential equations, whose resolution usually required numerical methods. So, in those first years, physicists began to run simulations with programs able to solve the constitutive equations of a system, although not always from simple descriptions.

A great step forward in electronics was the SPICE program, at the beginning of the 70s (2). This FORTRAN-based program was able to compute non-linear electronic circuits (excluding radiation effects) and solve their time-domain integro-differential equations. Over the years, Berkeley's SPICE became the reference among simulation programs, and its success was such that almost all the simulators developed since are based on the Nagel and Pederson algorithms of the 70s.

From the 80s, and seeking to solve three-dimensional problems, the method of moments (MoM) was developed. It solved systems formulated as boundary integral equations (3) and became very popular, being used in Fluid Mechanics, Acoustics and Electromagnetism. Today it is still used to solve two-dimensional electromagnetic structures.

The algorithms have progressed enormously since then, with the emergence in the 90s of the finite element method (FEM, frequency domain) and the finite-difference time-domain method (FDTD, time domain), both based on the resolution of systems formulated as differential equations; they became important benchmarks for a new generation of algorithms able to solve complex systems (4). With these advances, the contribution of simulation to Physics reached spectacular dimensions.

WHAT IS THE VALUE OF AN ACCURATE MODEL?

When we study any physical phenomenon, we usually invoke a model. Whether it is an isolated phenomenon or one within an environment, and whether in Acoustics, Electromagnetism or Quantum Mechanics, having a well-characterized model is essential to predict its behavior in terms of its variables. Using an accurate model increases our confidence in the results.

However, modeling is complex. We need to know the relationships between the variables and, from there, derive a formulation that defines the behavior within a computer.

A good example is a piezoelectric material. In Electronics, piezoelectric materials are commonly used as resonators, and it is usual to find these devices in circuits (quartz or any other resonant material based on this property).

A very successful piezoelectric model was developed by Mason in the 40s (5). Thanks to the similarity between electromagnetic and acoustic waves, he managed to join both domains using transmission lines based on the telegrapher's equations, rewriting the constitutive equations. In this way he developed a piezoelectric model which is still used today. The model can be seen in Fig. 1 and has already been studied in previous posts.

Fig. 1 – Mason model of a piezoelectric resonator

This model practically solved small-signal analysis in the frequency domain, yielding the impedance resonance trace shown in Fig. 2.

Fig. 2 – Results of the analysis of the Mason model
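An impedance trace of the kind shown in Fig. 2 can be sketched with the lumped Butterworth–Van Dyke equivalent circuit, a common one-dimensional simplification of the Mason model. The element values below are purely illustrative, not those of any real resonator:

```python
import numpy as np

# Butterworth-Van Dyke equivalent: motional branch (Rm, Lm, Cm)
# in parallel with the static capacitance C0. Values are illustrative.
Rm, Lm, Cm, C0 = 2.0, 10e-3, 25e-15, 5e-12  # ohm, H, F, F

def bvd_impedance(f):
    w = 2 * np.pi * f
    Zm = Rm + 1j * w * Lm + 1 / (1j * w * Cm)   # motional branch
    Z0 = 1 / (1j * w * C0)                      # static capacitance
    return Zm * Z0 / (Zm + Z0)

# Series (resonance) and parallel (antiresonance) frequencies
fs = 1 / (2 * np.pi * np.sqrt(Lm * Cm))
fp = 1 / (2 * np.pi * np.sqrt(Lm * Cm * C0 / (Cm + C0)))
print(fs, fp)  # fp > fs: impedance dips at fs and peaks at fp
```

Sweeping `bvd_impedance` between fs and fp reproduces the characteristic dip-then-peak resonance trace of a piezoelectric resonator.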

However, models need to expand their predictive capacity.

The Mason model describes the piezoelectric behavior correctly when we work in the linear regime, but it fails when we need to know the large-signal behavior. New advances in the study of piezoelectric materials therefore included non-linear relationships in the constitutive equations (6).

Fig. 3 – Three-dimensional model of an inductor

In three-dimensional models, we must know well the characteristics that define the materials in order to obtain optimal results. In the inductor shown in Fig. 3, CoFeHfO is used as the magnetic material. It has a frequency-dependent complex magnetic permeability that must be defined in the material libraries.

The better the model is defined, the better the results will be, and this is the fundamental task of the physicist: obtaining a reliable model from the study of the phenomena and the materials.

A model is usually extracted by direct measurement, or through derived magnitudes, using systems of equations. With a correct model definition, the simulation results will be more reliable.

ANALYSIS USING SIMULATION

Once the model is correctly defined, we can perform an analysis by simulation. In this case we will study the H-field inside the inductor at 200 MHz using FEM analysis, and plot it, as shown in Fig. 4.

Fig. 4 – Magnetic field strength (H-field) inside the inductor

The result is drawn as a vector plot, chosen so we can see the direction of the H-field inside the inductor. We can verify, first, that the maximum H-field is inside the inductor, oriented towards the positive Y axis in the upper area, while in the lower part the orientation is the opposite. The maximum H-field obtained is 2330 A/m with a 1 W excitation between the inductor electrodes.

The behavior is precisely that of an inductor, whose value can also be estimated by calculating its impedance and drawing it on the Smith chart, Fig. 5.

Fig. 5 – Inductor impedance on the Smith chart

The Smith chart trace clearly shows an inductive impedance whose value decreases as frequency increases, because of the losses of the CoFeHfO magnetic material. These losses also make the resistance increase with frequency, so there will be a maximum Q within the useful band.

Fig. 6 – Quality factor of the inductor

Since a lossy inductor has a quality factor Q, we can plot it as a function of frequency, as in Fig. 6.
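As an illustrative sketch of this behavior (with made-up element values, not the CoFeHfO inductor of the figures), Q can be computed from a complex impedance as Q = Im(Z)/Re(Z); with a loss term that grows with frequency, Q shows a maximum inside the band:

```python
import numpy as np

# Series R-L model with frequency-dependent loss (illustrative values only)
L = 10e-9            # 10 nH inductance
R0 = 0.5             # DC series resistance, ohms
k = 1.25e-19         # crude loss term growing as w^2 (mimics core loss)

f = np.linspace(50e6, 1e9, 200)
w = 2 * np.pi * f
Z = (R0 + k * w**2) + 1j * w * L   # complex impedance
Q = Z.imag / Z.real                # quality factor = Im(Z)/Re(Z)

# Q rises with frequency, peaks, then falls as losses take over
print(f[np.argmax(Q)], Q.max())
```

The peak appears where the frequency-dependent loss equals the DC resistance; with these made-up values that happens around 318 MHz.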

Therefore, with the FEM simulation we have been able to analyze physical parameters of a modeled structure that would have cost much more time and effort to obtain through complex calculations and equations. This shows, as Feynman pointed out in that 1982 article, the power of simulation when accurate models and proper software are available.

However, simulation alone does not guarantee the best results. It is precisely the previous step, having an accurate model that faithfully defines the physical behavior of the structure, which ensures the reliability of the results.

EXPERIMENTAL RESULTS

The best way to check whether a simulation is valid is to resort to experimental results. Fortunately, the simulated inductor is taken from (7), where the authors show experimental results that validate the inductor model. In Fig. 7 and 8 we can see the simulated inductance and resistance values which, together with the quality factor, can be compared with the authors' experimental results.

Fig. 7 – Inductance as a function of frequency

Fig. 8 – Effective resistance as a function of frequency

The results obtained by the authors, who used HFSS to simulate the inductor, can be seen in Fig. 9. They simulated the structure with and without the magnetic core, and compared the simulation against the experimental results. From the graphs it can be concluded that the simulation results agree closely with the experimental measurements.

This shows that simulation is effective when the model is reliable, and that a model is accurate when the simulation results converge with the experimental ones. In this way, we have a powerful analysis tool that allows us to know in advance the behavior of a structure and make decisions before moving on to the prototyping process.

In any case, convergence is also important in a simulation. FEM simulation needs a mesh fine enough to achieve good convergence. A poor convergence level gives results far from the optimum, and very complex structures require a lot of processing power, heavy RAM use and, sometimes, simulation across several processors. For more complex structures the simulation time increases considerably, and that is one of the method's main disadvantages.

Although FEM simulators now allow the optimization of parameter values and even integration with other simulators, the complexity of the calculations still demands powerful computers to carry them out reliably.

CONCLUSIONS

Once again, we agree with Feynman when, in that 1982 paper, he chose a topic which seemed to hold no interest for his audience. Since then, Feynman's article has become a classic of the Physics literature. My experience over the years with several simulators tells me that the way they opened will advance considerably when quantum computers become a reality and their processing speed grows, allowing these tools to obtain reliable results in a short time.

Simulation of physical systems has been an important advance, making it possible to obtain results without building prototypes first, and it brings substantial savings in research and development costs.

REFERENCES

1. Feynman, R.; “Simulating Physics with Computers”; International Journal of Theoretical Physics, 1982, Vol. 21, Issue 6-7, pp. 467-488, DOI: 10.1007/BF02650179.
2. Nagel, Laurence W. and Pederson, D.O., “SPICE (Simulation Program with Integrated Circuit Emphasis)”, EECS Department, University of California, Berkeley, 1973, UCB/ERL M382.
3. Gibson, Walton C., “The Method of Moments in Electromagnetics”, 2nd Edition, CRC Press, 2014, ISBN: 978-1-4822-3579-1.
4. Reddy, J.N., “An Introduction to the Finite Element Method”, 2nd Edition, McGraw-Hill, 1993, ISBN: 0-07-051355-4.
5. Mason, Warren P., “Electromechanical Transducers and Wave Filters”, 2nd Edition, Van Nostrand Reinhold Inc., 1942, ISBN: 978-0-4420-5164-8.
6. Shim, Dong S. and Feld, David A., “A General Nonlinear Mason Model of Arbitrary Nonlinearities in a Piezoelectric Film”, IEEE International Ultrasonics Symposium Proceedings, 2010, pp. 295-300.
7. Li, LiangLiang, et al., “Small-Resistance and High-Quality-Factor Magnetic Integrated Inductors on PCB”, IEEE Transactions on Advanced Packaging, Vol. 32, No. 4, pp. 780-787, November 2009, DOI: 10.1109/TADVP.2009.2019845.

# Studying slotline transmission lines

PCB transmission lines are an optimal, low-cost solution for guided propagation at very high frequencies. The most popular lines are the microstrip and the coplanar waveguide. These transmission lines are easily realized on a printed circuit board and their impedance can be calculated from their dimensions. They carry quasi-TEM (transverse electromagnetic) modes, with negligible field components in the direction of propagation. However, there is another very popular line that can also be used at high frequencies: the slotline. In this post, we are going to study the electrical behavior of slotlines and some microwave circuits that can be built with them.

At high frequencies, lines usually behave as distributed transmission lines. It is therefore necessary to know their impedance so that there are no mismatch losses during propagation.

Microstrip and coplanar waveguides are very popular, since they are easily implemented on a printed circuit board, they are cheap and they can be calculated easily. In both lines the propagation mode is quasi-TEM, with almost no field components in the direction of propagation, and their characteristic impedance Zc and wavelength λg depend on the line dimensions and on the dielectric substrate that supports them.

There is another type of line usually used at very high frequencies: the slotline. This line is a slot in the copper plane through which a transverse electric mode propagates (specifically the TE01 mode, as shown in the following figure).

Fig. 1 –  TE01 mode on a slotline

The field is confined near the slot so that the propagation has the minimum possible losses and, as in microstrip lines, there is a discontinuity between the dielectric substrate and the air. The slotline is used as a transmission line on substrates with a high dielectric constant (around εr≥9.2), in order to confine the fields as close as possible to the slot, although it can also be used for couplings on substrates with lower dielectric constants. In this way, planar antennas can be fed with slotlines.

In this post, we will pay attention to its use as a transmission line (with high dielectric constants) and to the microwave circuits we can make with it, studying the transitions between both technologies (slotline to microstrip).

ANALYZING THE SLOTLINE TRANSMISSION LINE

Being a transmission line, the slotline, like the other lines, has a characteristic impedance Zc and a wavelength λs. In addition, for the TE01 propagation mode, the electric field component which propagates, in cylindrical coordinates, is Eφ, as shown in the next figure.

Fig. 2 – Eφ component

This component is calculated from the magnetic components Hr and Hz, taking the Z axis as the propagation direction, perpendicular to the electric field. From here we get an expression for the transverse propagation constant kc:

$E_{\varphi}=\dfrac {j{\omega}{\mu_0}}{k_c^2}\dfrac {\partial H_z}{\partial r}=-{\eta} \dfrac {\lambda_s}{\lambda_0}H_r$

$k_c=\dfrac {2{\pi}}{\lambda_0} \sqrt {1- \left( \dfrac {\lambda_0}{\lambda_s} \right)^2}$

where λ0 is the free-space wavelength of the propagated field. The first thing deduced from the expression for kc is that there is a cutoff wavelength λs: the field propagates as the TE01 mode only when λ0 ≤ λs, so that kc is real and propagation exists. This means that there is a cutoff thickness for the substrate, which depends on the dielectric constant εr. The expression for that cutoff thickness, at which the TE01 mode stops propagating, is

${\left( \dfrac {h}{\lambda_0} \right)}_c=\dfrac {1}{4\sqrt{{\epsilon_r}-1}}$

With these expressions, Gupta (see [1], page 283) obtained closed-form expressions for the line impedance Zc and the line wavelength λs, which allow us to characterize the transmission line and make microwave circuits with slotlines.
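The cutoff-thickness expression above is easy to evaluate numerically. As a sketch, for a substrate with εr = 10.8 at 10 GHz (assuming c = 3·10⁸ m/s):

```python
import math

def cutoff_thickness(eps_r, f_hz, c=3e8):
    """Substrate thickness at the TE01 cutoff condition:
    (h / lambda0)_c = 1 / (4 * sqrt(eps_r - 1))."""
    lambda0 = c / f_hz                       # free-space wavelength
    return lambda0 / (4 * math.sqrt(eps_r - 1))

h_c = cutoff_thickness(10.8, 10e9)
print(h_c * 1e3)  # cutoff thickness in mm, ~2.4 mm
```

As expected, the higher the dielectric constant, the thinner the substrate at which the cutoff condition is met.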

ANALYZING A SLOTLINE

Like microstrip and coplanar waveguides, the slotline can be analyzed with an FEM electromagnetic simulator. We are going to study a transmission line on an RT/Duroid 6010 substrate, with dielectric constant εr=10.8 and 0.5 mm thickness. The slot width is 5 mil. According to the impedance calculations, Zc is 68.4 Ω and λs is 14.6 mm at 10 GHz. A 3D view of the slotline is shown below.

Fig. 5 – Slotline 3D view

The next graph shows the S parameters with 50 Ω generator and load impedances.

Fig. 6 – Slotline S parameters

On the Smith chart

Fig. 7 – Slotline impedance on Smith Chart

where the impedance is 36.8−j·24.4 Ω at 10 GHz.
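From this simulated impedance, the mismatch against the 50 Ω reference can be checked by hand; this is just the standard reflection-coefficient arithmetic, not simulator output:

```python
import math

# Slotline impedance seen from a 50-ohm reference at 10 GHz
Z = 36.8 - 24.4j       # ohms, from the FEM simulation above
Z0 = 50.0

gamma = (Z - Z0) / (Z + Z0)                 # reflection coefficient
vswr = (1 + abs(gamma)) / (1 - abs(gamma))  # standing-wave ratio
rl_db = -20 * math.log10(abs(gamma))        # return loss in dB
print(abs(gamma), vswr, rl_db)              # ~0.31, ~1.9, ~10.2 dB
```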

It is possible to show the propagated surface current on the line in 3D view

Fig. 8 – Slot surface current, in A/m

where it can be seen that the surface current is confined as close as possible to the slot. From this current the H-field can be derived and, therefore, the E-field, which only has a transverse component. Two maxima can also be seen in the current magnitude, which shows that the slot length is λs.

FEM simulation allows us to analyze slotlines and build microwave circuits with them, using the characterization shown in [1].

SLOTLINE-TO-MICROSTRIP TRANSITIONS

Since the slotline is a slot made in a copper plane, transitions can be made from slotline to microstrip. A typical transition is the following:

Fig. 9 – Slotline-to-microstrip transitions

The microstrip line ends in a λm/4 open-circuit stub, so the current is minimum at the open end and maximum at the transition location. In the same way, the slotline ends in a λs/4 short-circuit stub, with minimum surface current at the transition location. The equivalent circuit for each transition is then:

Fig. 10 – Equivalent circuit for a slotline-to-microstrip transition

Using the FEM simulator it is possible to study how a transition behaves. The next graph shows its S parameters. The transition has been made on RT/Duroid 6010, with 70 mil thickness and 25 mil slot width. The microstrip width is 50 mil and the working band is 0.7–2.7 GHz.

Fig. 11 – Transition S parameters

and plotting the surface current on the transition gives the following:

Fig. 12 – Current on the transition.

where the coupling of the current and its distribution along the slotline can be seen.

OTHER MICROWAVE CIRCUITS BASED ON SLOTLINES

The slotline is a versatile line. Combined with microstrip (the microstrip ground plane can include slots), it allows us to make a series of interesting circuits, such as those shown in Fig. 13.

Fig. 13 – Microwave circuits with slotline and microstrip.

The circuit in Fig. 13(a) shows a balun in slotline and microstrip technology, where the microstrip is shorted to ground at the transition. The balanced part is the slotline section, since both ground planes work as differential ports, while the unbalanced part is the microstrip, referred to the ground plane in which the slots are etched. With this circuit it is possible to build frequency mixers or balanced mixers. Another interesting circuit is shown in Fig. 13(b), a rat-race hybrid in which the microstrip ring is not closed but coupled through a slot. In Fig. 13(c), a branchline coupler using a slotline is shown and, finally, Fig. 13(d) shows a De Ronde coupler. This last circuit is ideal for equalizing the odd/even mode phase velocities.

CONCLUSIONS

In this post we have analyzed the slotline used as a microwave transmission line, compared with other technologies. We have also made a small behavioral analysis using an FEM simulator, checking the possibilities of line analysis (S parameters and surface currents), and we have shown some circuits that can be made with this technology, verifying the versatility of this transmission line.

REFERENCES

1. Gupta, K.C., et al., “Microstrip Lines and Slotlines”, 2nd Edition, Artech House, Inc., 1996, ISBN 0-89006-766-X.

# Simulating transitions with waveguides

Waveguides are transmission lines widely used in very high frequency applications as guided propagation devices. Their main advantages are lower propagation losses, thanks to the use of a single conductor and air instead of a dielectric as in coaxial cable; greater power-handling capacity; and simple construction. Their main drawbacks are that they are bulky, that they cannot operate below their cutoff frequency, and that transitions from the guide to other technologies (such as coaxial or microstrip) often have losses. However, finite element method (FEM) simulation allows us to study and optimize these transitions, obtaining very good results. In this post we will study waveguides using an FEM simulator such as HFSS, which can analyze three-dimensional electromagnetic fields (3D simulation).

Waveguides are very popular in very high frequency circuits due to their ease of construction and low losses. Unlike in coaxial guides, the propagated fields are transverse electric or transverse magnetic (TE or TM), so they have a magnetic (TE) or electric (TM) field component in the propagation direction. These fields are the solutions of the Helmholtz equation under certain boundary conditions:

• For the TE modes, Ez(x,y)=0

$\left( \dfrac {{\partial}^2}{\partial x^2} +\dfrac {{\partial}^2}{\partial y^2} +k_c^2\right)H_z(x,y)=0$

• For the TM modes, Hz(x,y)=0

$\left( \dfrac {{\partial}^2}{\partial x^2} +\dfrac {{\partial}^2}{\partial y^2} +k_c^2\right)E_z(x,y)=0$

Solving these differential equations by separation of variables, and applying the boundary conditions of a rectangular enclosure where all the walls are electric walls (conductors, on which the tangential component of the electric field vanishes),

Fig. 2 – Boundary conditions on a rectangular waveguide

we can obtain a set of solutions for the electromagnetic field inside the guide, starting from the solutions of the expressions shown in Fig. 1.

Fig. 3 – Table of electromagnetic fields and parameters in rectangular waveguides

Therefore, the electromagnetic fields propagate as modes, called TEmn for the transverse electric case (Ez=0) or TMmn for the transverse magnetic case (Hz=0). From the propagation constant kc we obtain an expression for the cutoff frequency fc, the lowest frequency at which a given mode propagates inside the waveguide:

$f_c=\dfrac {c}{2} \sqrt {\left( \dfrac {m}{a} \right) ^2+\left( \dfrac {n}{b} \right) ^2}$

The dominant mode is TE10: although the expression has extrema at m,n=0, the TE00 and TM00 modes do not exist and, since a>b, the lowest cutoff frequency of the waveguide corresponds to the TE10 mode. That is the mode we are going to analyze using a 3D FEM simulation.
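The cutoff expression above is straightforward to evaluate. As a sketch, for the a = 3.10 mm, b = 1.55 mm guide analyzed in the next section (assuming c = 3·10⁸ m/s):

```python
import math

def f_cutoff(m, n, a, b, c=3e8):
    """Cutoff frequency of the TEmn/TMmn mode of an a-by-b rectangular guide."""
    return (c / 2) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

a, b = 3.10e-3, 1.55e-3  # guide dimensions in metres
modes = {}
for (m, n) in [(1, 0), (0, 1), (2, 0), (1, 1)]:
    modes[(m, n)] = f_cutoff(m, n, a, b) / 1e9  # GHz
print(modes)  # TE10 ~48.4 GHz; TE01 and TE20 ~96.8 GHz
```

Since a = 2b here, the TE01 and TE20 cutoffs coincide at exactly twice the TE10 cutoff, which is why the single-mode band of this guide runs from roughly 48 to 97 GHz.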

SIMULATION OF A RECTANGULAR WAVEGUIDE BY THE FINITE ELEMENTS METHOD

In a 3D simulator it is very easy to model a rectangular waveguide: it is enough to draw a rectangular prism with the appropriate dimensions a and b. In this case a=3.10 mm and b=1.55 mm. The TE10 mode starts to propagate at 48 GHz and the next mode, TE01, at 97 GHz, so the waveguide is analyzed at 76 GHz, the frequency at which it will work. Drawn in HFSS, the waveguide looks as follows:

Fig. 5 – Rectangular waveguide. HFSS model

The inner rectangular prism is assigned vacuum, and the side faces are assigned perfect-E boundaries. Two wave ports are assigned to the rectangles at −z/2 and +z/2, using the first propagation mode. The next figure shows the E-field along the waveguide.

Fig. 6 – Electric field inside the waveguide

Analyzing the scattering parameters from 40 to 90 GHz, we get:

Fig. 7 – S parameters for the rectangular waveguide

where it can be seen that the first mode starts to propagate inside the waveguide at 48.5 GHz.

Above 97 GHz the TE01 mode could also propagate; since that mode does not interest us, the analysis is done at 76 GHz.

WAVEGUIDE TRANSITIONS

The most common transitions are from waveguide to coaxial, or from waveguide to microstrip, in order to use the propagated energy in other kinds of applications. For this, a probe is placed in the direction of the E-field, which couples its energy to the probe (for the TE10 mode, the E-field lies along the Y axis).

Fig. 8 – Probe location

The probe is a quarter-wavelength resonant antenna at the desired frequency. Along the X axis, the E-field maximum occurs at x=a/2, while to find the maximum along the Z axis the guide is terminated in a short circuit. The E-field is then null at the guide wall and maximum at a quarter of the guide wavelength, which is

${\lambda_g}=\dfrac {\lambda}{\sqrt {1-\left( \dfrac {f_c}{f} \right)^2}}$

and in our case, at 76 GHz, λ is 3.95 mm and λg 5.11 mm. The probe length will then be 0.99 mm and the short-circuit distance 2.56 mm.
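These wavelengths follow directly from the guide-wavelength formula above; a sketch assuming c = 3·10⁸ m/s and the TE10 cutoff of the 3.10 mm guide:

```python
import math

c = 3e8
f = 76e9                 # working frequency
a = 3.10e-3              # broad wall dimension, metres
fc = c / (2 * a)         # TE10 cutoff, ~48.4 GHz

lam = c / f                                   # free-space wavelength
lam_g = lam / math.sqrt(1 - (fc / f) ** 2)    # guide wavelength
print(lam * 1e3, lam_g * 1e3)                 # ~3.95 mm and ~5.12 mm
```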

In coaxial transitions, it is enough to insert a coax whose inner conductor protrudes λ/4 into the guide, at λg/4 from the short circuit. But microstrip transitions use a dielectric as support for the conductor lines, so the effect of the dielectric must also be kept in mind.

Our transition can be modeled in HFSS by assigning different materials. The probe is built on a Rogers RO3003 substrate, with low dielectric constant and low losses, making the transition to microstrip. The lateral faces and the lines are assigned perfect-E boundaries, and the substrate body the RO3003 material. The waveguide interior and the transition cavity are assigned vacuum. On the end face of the transition, a wave port is assigned.

Fig. 10 – Rectangular waveguide to microstrip transition

Now, the simulation is done analyzing the fields and S parameters.

Fig. 11 – E-field on the transition

and it can be seen how the E-field couples to the probe and the signal propagates along the microstrip.

Fig. 12 – Transition S parameters

Looking at the S parameters, we can see that the lowest-loss coupling occurs at 76–78 GHz, our working band.

OTHER DEVICES IN WAVEGUIDES: THE MAGIC TEE

Among the usual waveguide devices, one of the most popular is the Magic Tee, a special hybrid junction that can be used as a divider, a combiner and a signal adder/subtractor.

Fig. 13 – Magic Tee

Its behavior is very simple: when an EM field is fed into port 2, the signal is divided in phase between ports 1 and 3. Port 4 is isolated because its E-plane is perpendicular to that of port 2. But if the EM field is fed into port 4, it is divided between ports 1 and 3 in phase opposition (180°), while port 2 is now isolated.
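The ideal behavior described above can be written as a 4×4 scattering matrix; a sketch assuming a lossless, perfectly matched magic tee and the port numbering used in the text (port 2 as the sum arm, port 4 as the difference arm):

```python
import numpy as np

# Ideal magic-tee scattering matrix, ports ordered 1..4 as in the text
S = (1 / np.sqrt(2)) * np.array([
    [0,  1,  0,  1],
    [1,  0,  1,  0],
    [0,  1,  0, -1],
    [1,  0, -1,  0],
], dtype=complex)

# Lossless device: S must be unitary
assert np.allclose(S.conj().T @ S, np.eye(4))

# Feed port 2: equal in-phase outputs at ports 1 and 3; port 4 isolated
b2 = S @ np.array([0, 1, 0, 0], dtype=complex)
print(b2)   # outputs at ports 1 and 3, nothing at port 4

# Feed port 4: ports 1 and 3 in phase opposition; port 2 isolated
b4 = S @ np.array([0, 0, 0, 1], dtype=complex)
print(b4)   # opposite signs at ports 1 and 3, nothing at port 2
```

Feeding ports 2 and 4 simultaneously, in phase or in antiphase, superposes these two columns, which is exactly the adder/subtractor behavior shown in the field plots below.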

Using FEM simulation to analyze the Magic Tee, and feeding power through port 2, we get the following response:

Fig. 14 – E-field inside the Magic Tee, feeding through port 2.

and the power is split between ports 1 and 3 while port 4 is isolated. Doing the same from port 4, we get:

Fig. 15 – E-field inside the Magic Tee, feeding through port 4.

where now port 2 is isolated.

To see the phases, a vector plot of the E-field is used:

Fig. 16 – Vector E-field inside the Magic Tee, feeding through port 2

where it can be seen that the field at ports 1 and 3 has the same direction, and therefore they are in phase. Feeding from port 4:

Fig. 17 – Vector E-field inside the Magic Tee, feeding through port 4

in which it can be seen that the signals at ports 1 and 3 have the same level but opposite phase (180° between them).

FEM simulation allows us to analyze the behavior of the EM field from different points of view just by changing the excitations. For example, feeding signals in phase into ports 2 and 4, they add in phase at port 3 and cancel at port 1.

Fig. 18 – E-field inside the Magic Tee, feeding ports 2 and 4 in phase.

whereas if the phase at port 2 or port 4 is inverted, the signals add at port 1 and cancel at port 3.

Fig. 19 – E-field inside the Magic Tee, feeding ports 2 and 4 in phase opposition

and the result is a signal adder/subtractor.

CONCLUSIONS

The object of this post was to analyze the electrical behavior of waveguides using a 3D FEM simulator. The advantage of these simulators is that they analyze EM fields on three-dimensional structures with good precision. Modeling is the most important part in defining the structure to be studied, since a 3D simulator meshes the structure and, as the mesh needs a high number of tetrahedra to achieve good convergence, it also tends to need more machine memory and processing capacity.
The structures analyzed here, due to their simplicity, have not required long simulation times or significant processing capacity, but as models become more complex, the processing capacity needed to achieve good accuracy increases.

In subsequent posts, other methods to reduce the modeling effort in complex structures will be analyzed, through the use of symmetry planes that allow us to divide the structure and reduce the meshing considerably.

REFERENCES

1. Swanson, Daniel G., Jr. and Hoefer, Wolfgang J. R., “Microwave Circuit Modeling Using Electromagnetic Field Simulation”, Artech House, 2003, ISBN 1-58053-308-6.
2. Wade, Paul, “Rectangular Waveguide to Coax Transition Design”, QEX, Nov/Dec 2006.

# Using the Three-Dimensional Smith Chart

The Smith chart is a standard tool in RF design. Developed by Phillip Smith in 1939, it has become the most popular graphical method for representing impedances and performing operations with complex numbers. Traditionally, the Smith chart has been used in its 2D polar form, within a circle of unit radius. However, the 2D format has some restrictions when active impedances (oscillators) or stability circles (amplifiers) are represented, since these usually leave the polar chart. In recent years the three-dimensional Smith chart has become popular, as advances in 3D rendering software make it easy to use for design. In this post I will try to show the handling of the three-dimensional Smith chart and its application to a low-noise amplifier design.

When Phillip Smith was working at Bell Labs, he had to match an antenna, and he looked for a way to solve the design graphically. From the mathematical expressions that define impedances on transmission lines, he managed to represent the complex impedance plane by circles of constant resistance and constant reactance. These circles made it easier to represent any impedance in a polar space, with the perfect match at the center of the chart and the outer circle representing pure reactance. Traditionally, the Smith chart has been represented in polar form as shown below:

Fig. 1 – Traditional Smith’s Chart

The impedance is normalized by taking the ratio between the impedance and the generator impedance. The center of the chart is pure unit resistance (perfect match), while the peripheral circle that limits the chart is pure reactance. The left end of the chart represents the short circuit and the right end the open circuit. The chart became very popular for calculating matching networks with transmission lines using a graphical method. However, design difficulties appeared when active impedances were analyzed, when studying amplifier stability and when designing oscillators.

By design, the chart is limited to impedances with positive real part, but by extending the complex plane through a Möbius transformation it can also represent impedances with negative real part [1]. This chart, expanded to the negative-real-part plane, can be seen in the following figure:

Fig. 2- Smith’s Chart expanded to active impedances

However, this chart has two issues: 1) although it allows all impedances to be represented, there is a problem with the complex infinity, so it remains limited; and 2) the chart has large dimensions that make it awkward to use in a graphic environment, even a computer-aided one. Yet the extension is needed when analyzing amplifier stability circles, since in most cases the centers of these circles lie outside the passive-impedance chart.

In a graphical computing environment, the circles are computed by the software itself, which can limit the chart to the passive region and draw only part of the stability circle. But with oscillators we still have the problem of the complex infinity, which can be solved through a representation on the Riemann sphere.
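The reason active impedances leave the chart is easy to check numerically: with the normalized impedance z, the reflection coefficient Γ = (z − 1)/(z + 1) stays inside the unit circle only while Re z ≥ 0. A small sketch:

```python
# Reflection coefficient Gamma = (z - 1)/(z + 1) for a normalized impedance z.
# Passive impedances (Re z >= 0) stay inside the unit circle; a negative
# real part (active impedance) pushes |Gamma| beyond it.
def gamma(z):
    return (z - 1) / (z + 1)

print(abs(gamma(1 + 0j)))     # matched load -> 0 (chart centre)
print(gamma(0j))              # short circuit -> -1 (left edge)
print(abs(gamma(0.5 + 2j)))   # passive impedance -> magnitude < 1
print(abs(gamma(-0.3 + 1j)))  # active impedance -> magnitude > 1
```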

RIEMANN’S SPHERE

The Riemann sphere is a mathematical construction for representing the complete complex plane, including infinity. The entire complex plane is mapped onto a spherical surface by a stereographic projection.

Fig. 3 – Projection of the complex plane on a sphere

In this representation, the southern hemisphere maps the region around the origin, the northern hemisphere the region around infinity, and the equator the circle of unit radius. The distribution of complex values on the sphere can be seen in the following figure:

Fig. 4 – Distribution of complex values in the sphere

Thus, any complex number can be represented on a surface that is easy to handle.
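A minimal sketch of this stereographic projection, assuming the convention above (origin at the south pole, infinity at the north pole, unit circle on the equator):

```python
def to_riemann_sphere(w):
    """Stereographic projection of a complex number onto the unit sphere.
    |w| < 1 maps to the southern hemisphere, |w| > 1 to the northern one,
    |w| = 1 to the equator; infinity corresponds to the north pole."""
    d = 1 + abs(w) ** 2
    return (2 * w.real / d, 2 * w.imag / d, (abs(w) ** 2 - 1) / d)

print(to_riemann_sphere(0j))      # (0.0, 0.0, -1.0): the origin sits at the south pole
print(to_riemann_sphere(1 + 0j))  # (1.0, 0.0, 0.0): a unit-circle point on the equator
```

Large values of |w| give points close to (0, 0, 1), so the whole plane, infinity included, fits on the sphere.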

SMITH’S CHART ON A RIEMANN’S SPHERE

Since the Smith’s Chart is a representation on the complex plane, it can be projected in the same way onto a Riemann’s sphere [2], as shown in the following figure

Fig. 5 – Projection of the Smith’s Chart on a Riemann’s sphere

In this case, the northern hemisphere shows the impedances with positive resistance (passive impedances); the southern hemisphere, the impedances with negative resistance (active impedances); the eastern hemisphere, the inductive impedances; and the western one, the capacitive impedances. The main meridian shows the purely resistive impedances.

Thus, any impedance, active or passive, corresponds to a point on the sphere, which greatly simplifies its drawing. In the same way, we can represent the stability circles of any amplifier without having to expand the chart. For example, suppose we want to represent the stability circles of a transistor whose S-parameters at 3 GHz are

S11=0.82∠-69.5°   S21=5.66∠113.8°   S12=0.03∠48.8°  S22=0.72∠-37.6°

its representation in the conventional Smith’s Chart is

Fig. 6 – Traditional representation for stability circles

while in the three-dimensional chart it is

Fig. 7 – Stability circles on the 3D chart

where both circles can be seen, with one fraction in the northern hemisphere and the other in the southern one. Their representation is thus greatly simplified.
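These statements can be checked numerically. The sketch below applies the standard Rollett-factor and stability-circle formulas (see e.g. [4]) to the S-parameters quoted above, confirming that the transistor is only conditionally stable and that both circle centers fall outside the passive chart:

```python
import cmath
import math

def s(mag, ang_deg):
    """Build a complex S-parameter from magnitude and angle in degrees."""
    return cmath.rect(mag, math.radians(ang_deg))

# Transistor S-parameters at 3 GHz quoted in the text
S11, S21 = s(0.82, -69.5), s(5.66, 113.8)
S12, S22 = s(0.03, 48.8), s(0.72, -37.6)

delta = S11 * S22 - S12 * S21
# Rollett stability factor: K < 1 means only conditionally stable
K = (1 - abs(S11)**2 - abs(S22)**2 + abs(delta)**2) / (2 * abs(S12 * S21))

# Source (input) and load (output) stability circles: center and radius
Cs = (S11 - delta * S22.conjugate()).conjugate() / (abs(S11)**2 - abs(delta)**2)
rs = abs(S12 * S21) / abs(abs(S11)**2 - abs(delta)**2)
Cl = (S22 - delta * S11.conjugate()).conjugate() / (abs(S22)**2 - abs(delta)**2)
rl = abs(S12 * S21) / abs(abs(S22)**2 - abs(delta)**2)

print(K < 1)                      # True: conditionally stable
print(abs(Cs) > 1, abs(Cl) > 1)   # True True: both centers lie outside the passive chart
```

Since |Cs| and |Cl| exceed 1, drawing these circles on the flat chart forces the expanded representation, while on the sphere they fit naturally.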

A PRACTICAL APPLICATION: LOW NOISE AMPLIFIER

Let us see a practical application of the 3D chart: matching the previous amplifier for maximum stable gain and minimum noise figure at 3 GHz. We use traditional methods, starting from the transistor parameters

S11=0.82∠-69.5°   S21=5.66∠113.8°   S12=0.03∠48.8°  S22=0.72∠-37.6°

NFmin=0.62dB  Γopt=0.5∠67.5°  Rn=0.2

The S-parameters are represented on the 3D Smith’s Chart and the stability circles are drawn. For a better representation, three frequencies are used, spanning a 500 MHz bandwidth.

Fig. 8 – S-parameters and stability circles for the transistor

The S-parameters as well as the stability circles can be seen in both the conventional Smith’s Chart and the 3D one. In the conventional chart, the stability circles extend beyond its boundary.

An amplifier is unconditionally stable when the stability circles lie entirely in the active-impedance region of the chart (the southern hemisphere), and two cases must be distinguished: if a circle lies in the active region and does not surround the passive one, the unstable impedances are located inside the circle; if it surrounds the passive region, the unstable impedances are located outside the circle.


Fig. 9 – Possible cases for stability circles in the active region

In this case, since part of each circle enters the passive-impedance region, the amplifier is conditionally stable, and the impedances that could destabilize it lie inside the circles. This cannot yet be seen clearly in the three-dimensional chart; the app does not seem to compute it, and it would be an interesting feature to include in later versions, because it would greatly ease the design.

Let us now match the input for minimum noise. This requires a matching network that transforms 50 Ω into the reflection coefficient Γopt, whose normalized impedance is Zopt=0.86+j·1.07. In the app, we open the design window and enter this impedance

Fig. 10 – Representation of Γopt

Working now in admittance, we move along the constant-conductance circle until the real part of the impedance is 1. This is done by estimation: the target susceptance is 0.5, so it must be increased by 0.5 − (−0.57) = 1.07, which corresponds to a shunt capacitor of 1.14 pF.

Fig. 11 – Moving to the circle with real part 1.

Now we only need a component that cancels the reactance while keeping the resistance constant. As the reactance is −1.09, the added value must be +1.09, which corresponds to a series inductor of 2.9 nH.

Fig. 12 – Source impedance matched to Γopt
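Both element values follow directly from the normalized susceptance and reactance steps read off the chart; a quick numerical check at 3 GHz:

```python
import math

f = 3e9                      # design frequency
Z0 = 50.0                    # reference impedance
w = 2 * math.pi * f

B_shunt = 1.07               # normalized susceptance step along the constant-conductance circle
X_series = 1.09              # normalized reactance cancelled by the series element

C = B_shunt / (w * Z0)       # shunt capacitor realizing B_shunt
L = X_series * Z0 / w        # series inductor realizing X_series
print(round(C * 1e12, 2), "pF")   # about 1.14 pF
print(round(L * 1e9, 2), "nH")    # about 2.9 nH
```

The de-normalization is simply C = B/(ωZ0) for the shunt element and L = X·Z0/ω for the series one.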

Once the input matching network for the minimum noise figure is calculated, we recalculate the S-parameters. Since this is an active device, the matching network transforms them into:

S11=0.54∠-177°   S21=8.3∠61.1°   S12=0.04∠-3.9°  S22=0.72∠-48.6°

which are represented on the Smith’s Chart to obtain the stability circles.

Fig. 13 – Transistor with matching network to Γopt and stability circles.

The unstable regions are the interiors of the circles, so the amplifier remains stable.

Now the output matching network is designed for maximum stable gain: the output reflection coefficient S22=0.72∠-48.6° must be loaded with ΓL (the conjugate of S22), transforming 50 Ω into ΓL=0.72∠48.6°. This is done in the same way as for the input matching network. With the complete matching in place, the S-parameters are recalculated with both input and output networks:

S11=0.83∠145°   S21=12∠-7.5°   S12=0.06∠-72.5°  S22=0.005∠162°

The gain is 20·log(|S21|)=21.6dB and the noise figure is 0.62dB (NFmin). It only remains to represent these parameters on the three-dimensional chart to obtain the stability circles.
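The quoted gain follows directly from the magnitude of S21:

```python
import math

S21_mag = 12.0                       # |S21| after input and output matching
gain_db = 20 * math.log10(S21_mag)   # gain in dB
print(round(gain_db, 1))             # 21.6
```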

Fig. 14 – Low noise amplifier and stability circles

In this case, the stable region is inside the input stability circle and outside the output stability circle. Since both reflection coefficients, S11 and S22, fall within the stable regions, the amplifier is stable.

CONCLUSIONS

In this entry I have had my first contact with the three-dimensional Smith’s Chart. The goal was to study its potential with respect to the traditional chart in microwave engineering. Its main advantage is that, by mapping the Möbius-transformed plane onto a Riemann’s sphere, infinite values can be represented, providing a three-dimensional graphical tool that covers practically all passive and active impedances, as well as parameters that are difficult to draw in the traditional chart, such as stability circles.

In its version 1, the app, which can be found on the website 3D Smith Chart / A New Vision in Microwave Analysis and Design, offers several design options and configurations, although some applications will undoubtedly be added in future versions. Having studied the stability circles of an amplifier, one of the most valuable additions would be the graphical identification of the stability regions. Although this can be solved by calculation, a visual representation is always more convenient.

The app has a user manual with examples explained in a simple way, so that the designer becomes familiar with it quickly. In my professional opinion, it is an ideal tool for those of us who are used to the Smith’s Chart for our matching network calculations.

REFERENCES

1. Müller, Andrei; Dascalu, Dan C.; Soto, Pablo; Boria, Vicente E.; “The 3D Smith Chart and Its Practical Applications”; Microwave Journal, vol. 5, no. 7, pp. 64–74, Jul. 2012
2. Müller, Andrei A.; Soto, Pablo; Dascalu, Dan; Neculoiu, Dan; Boria, Vicente E.; “A 3D Smith Chart Based on the Riemann Sphere for Active and Passive Microwave Circuits”; IEEE Microwave and Wireless Components Letters, vol. 21, no. 6, pp. 286–288, Jun. 2011
3. Zelley, Chris; “A Spherical Representation of the Smith Chart”; IEEE Microwave Magazine, vol. 8, pp. 60–66, Jul. 2007
4. Grebennikov, Andrei; Kumar, Narendra; Yarman, Binboga S.; “Broadband RF and Microwave Amplifiers”; Boca Raton: CRC Press, 2016; ISBN 978-1-1388-0020-5

# Statistical analysis using Monte Carlo method (II)

In the previous post, some simple examples of the Monte Carlo method were shown. In this post the method will be analyzed in more depth, performing a statistical analysis on a more complex system, examining its output variables and studying the results in ways that prove quite useful. The advantage of simulation is that variables can be generated randomly, and correlations between variables can be set, producing different effects in the analysis of system performance. Thus, any system can not only be analyzed statistically through random generation of variables; this random generation can also be linked to batch analysis, to production failures, and to post-production recovery.

The circuits studied in the previous post were very simple, allowing us to see how random variables are assigned and what results they produce when integrated into a more complex system. With this analysis, it is possible to check the performance and propose corrections that statistically limit the variations of the final system.

Here, the dispersive effect of tolerances will be studied on one of the circuits where stable performance is hardest to achieve: an electronic filter. A bandpass filter will be designed and tuned to a fixed frequency, with given passband and stopband bandwidths, and several statistical analyses will be run on it to check its response against the component tolerances.

DESIGN OF THE BANDPASS FILTER

A bandpass filter is designed with a 37.5 MHz center frequency, a 7 MHz passband (return losses ≥14 dB) and a 19 MHz stopband bandwidth (stopband attenuation >20 dB). The calculation yields a three-section filter, whose schematic is

Three-section bandpass filter

From the calculated component values, standard values that realize the filter transfer function are chosen, and the resulting frequency response is

Bandpass filter frequency response

where it can be checked that the center frequency is 37.5 MHz, the return losses are better than 14 dB within ±3.5 MHz of the center frequency, and the stopband width is 18.8 MHz, extending 8.5 MHz below and 10.3 MHz above the center frequency.
Once the filter is designed, a first statistical analysis is done, considering that the capacitor tolerance is ±5% and that the inductors are adjustable. In addition, no correlation is set between the random variables, so each can take a random value independently.

STATISTICAL ANALYSIS OF THE FILTER WITHOUT CORRELATION BETWEEN VARIABLES

As seen in the previous post, random variables produce dispersion at the output, so limits must be defined within which a filter is considered valid, and the frequency response analyzed against them. For this, yield analysis is used: a Monte Carlo-based analysis that checks the performance, or effectiveness, of the design. To perform it, the validation specifications must be defined. The chosen specifications are return losses >13.5 dB between 35 and 40 MHz (a 2 MHz reduction of the passband) and attenuation >20 dB at frequencies ≤29 MHz and ≥48 MHz. The statistical analysis gives

Statistical analysis of the filter. Variables without correlation.

The result is poor: only 60% of the filters generated with ±5% tolerance variables could be considered valid. The rest would fail quality control, meaning that 40% defective material would have to be returned to production for reprocessing.
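The yield mechanism itself is easy to reproduce outside a circuit simulator. The sketch below is a toy example with a hypothetical spec and illustrative L and C values (a single resonator, not the three-section filter above): each trial draws toleranced components, checks the spec, and the yield is the fraction of passing trials.

```python
import math
import random

random.seed(1)

def yield_estimate(n_trials=2000, tol=0.05):
    """Monte Carlo yield sketch on a hypothetical spec: a single LC resonator
    must stay within +/-1% of its nominal resonant frequency when L and C
    each carry a +/-5% tolerance (illustrative values only)."""
    L0, C0 = 120e-9, 150e-12
    f0 = 1 / (2 * math.pi * math.sqrt(L0 * C0))   # nominal, about 37.5 MHz
    passed = 0
    for _ in range(n_trials):
        L = L0 * random.uniform(1 - tol, 1 + tol)
        C = C0 * random.uniform(1 - tol, 1 + tol)
        f = 1 / (2 * math.pi * math.sqrt(L * C))
        passed += abs(f - f0) / f0 <= 0.01        # spec check for this trial
    return passed / n_trials

print(yield_estimate())   # only a fraction of the trials meets the spec
</```

Even this toy case shows how quickly uncorrelated ±5% tolerances destroy the yield of a frequency-selective circuit.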

The graph shows that the return losses are primarily responsible for this poor performance. What can be done to improve it? In this case there are four random variables. However, two capacitors have the same value (15 pF) and, when assembled in a production process, usually belong to the same manufacturing batch. Without correlation, these variables can take completely different values, as the following chart shows

C1, C3 without correlation

However, when the assembled components belong to the same manufacturing batch, their tolerances always drift in the same direction, so there is correlation between these variables.
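Same-batch drift can be modeled as a shared random offset plus a small independent residual. The 90/10 split below is an illustrative assumption, not a measured figure:

```python
import random

random.seed(7)

def batch_values(nominal, tol, n):
    """Components from one batch share a common drift (the correlated part)
    plus a small independent residual; split 90/10 here as an assumption."""
    common = random.uniform(-tol, tol) * 0.9
    return [nominal * (1 + common + random.uniform(-tol, tol) * 0.1)
            for _ in range(n)]

c1, c3 = batch_values(15e-12, 0.05, 2)   # two 15 pF capacitors, same batch
print(abs(c1 - c3) / 15e-12 < 0.02)      # True: they track each other closely
```

Both parts still sit anywhere inside the ±5% window, but their difference stays small, which is exactly the correlation the next analysis exploits.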

STATISTICAL ANALYSIS OF THE FILTER WITH CORRELATION BETWEEN VARIABLES

When correlation is used, the spread of the tolerances is reduced: the process is no longer totally random, but consists of manufacturing batches within which the variations occur. Here a correlation is set between the variables C1 and C3, which have the same nominal value and belong to the same manufacturing batch, so the correlation graph becomes

C1, C3 with correlation

where the variation trend within each batch is the same. Setting the correlation between the two variables, we study the effective yield of the filter and obtain

Statistical analysis with correlation in C1, C3

which seems even worse. But what is really happening? It must be taken into account that the variable correlation lets us analyze complete batches, whereas the previous analysis could not distinguish batches at all. We have therefore obtained 26 successful complete manufacturing processes: out of 50 complete manufacturing runs, 26 would be successful.

However, 24 complete runs would have to be returned to production with their whole lots, which is still a bad result. But there is a solution: post-production adjustment.

As said, at this point the response looks very bad, but remember that the inductors were made adjustable. What happens now? We run a new analysis, allowing these variables to move within ±10% of their nominal values, enable post-production optimization in the Monte Carlo analysis and, voilà: even starting from a very high defect rate, 96% of the filters can be recovered within the valid limits.

Statistical analysis with post-production optimization

This is a clear improvement: the analysis shows that almost all batches can be recovered through post-production adjustment, so it reveals not only the defect rate but also the recovery possibilities.
The variations of the inductors (here, those of the series resonances) can be plotted to analyze the sensitivity of the circuit to the most critical changes. This makes it possible to define an adjustment pattern that reduces the time needed to tune the filter.

Analysis of the adjustment patterns of the series-resonance inductors

Thus, with this analysis, carried out at design time, decisions can be made that fix the manufacturing patterns of the product and set the post-production adjustment patterns, knowing in advance the statistical response of the designed filter. This analysis is a very valuable resource before validating any design.

CONCLUSIONS

This post has gone a step further into the possibilities of Monte Carlo statistical analysis. The algorithm provides solid results and allows conditions to be set for various analyses, further optimizing the design. Through post-production adjustment, the recovery rate of the proposed design can be estimated. In the next post, another example of the Monte Carlo method will show further possibilities of the algorithm.



# Statistical analysis using Monte Carlo method (I)

When any electronic device is designed, several deterministic methods can be used to calculate its main parameters, obtaining the values we would physically measure in the device or system. These preliminary calculations support the development, and their results usually agree with the predictions. However, everything we manufacture is subject to tolerances, and these tolerances cause variations in the results that often cannot be analyzed easily without a powerful calculation tool. In the 1940s, von Neumann and Ulam developed a non-deterministic, statistical method called the Monte Carlo method. In this blog post, we are going to analyze the use of this powerful method for predicting possible tolerances in circuits, especially when they are manufactured industrially.

In any process, the output is a function of the input variables. These variables generate a response that can be determined whether the system is linear or not. The relationship between the response and the input variables is called the transfer function, and knowing it allows us to obtain the result for any input excitation.

However, it must be taken into account that the input variables are random variables, each with its own distribution function and subject to stochastic processes, although their behavior is predictable through probability theory. For example, when we make a measurement, we obtain its average value and the error with which that magnitude can be measured. This lets us bound the range within which the value is correct and decide when the magnitude is behaving incorrectly.

Over many years I have learned to turn simulation results into real physical results with predictable behavior and valid conclusions, and I have noticed that in most cases simulation is reduced to obtaining the desired result, without studying how the variables influence that result. Yet most simulators include very useful statistical algorithms which, properly used, provide data the designer can use later to predict a system's behavior, or at least to analyze what may happen.

However, these methods are rarely used, whether from a lack of statistical knowledge or from ignorance of how such tools can be applied. Therefore, in these posts we shall analyze the Monte Carlo method in circuit simulation and discover an important tool unknown to many simulator users.

DEVICES AS RANDOM VARIABLES

Electronic circuits are built from simple devices that show statistical behavior due to manufacturing. Device manufacturers publish nominal values and tolerances: a resistor manufacturer publishes not only rated values and dimensions, but also tolerances, stress, temperature dependence, etc. These parameters provide important information and, properly analyzed with a powerful calculation tool such as a simulator, let us predict the behavior of any complex circuit.

In this post we analyze only the error range around the nominal value, starting with one resistor. For any resistor, the manufacturer defines a nominal value and a tolerance; we assume 1 kΩ and ±5%, meaning the resistance lies between 950 Ω and 1.05 kΩ. In a bipolar transistor, the current gain β can take a value between 100 and 600 (e.g. the NXP BC817), which can cause a large and uncontrollable variation of the collector current. Knowing these data, we can analyze the statistical behavior of an electronic circuit through the Monte Carlo method.

First, let us look at the resistor: we have said it has a ±5% tolerance, and we will analyze its behavior with the Monte Carlo method using a circuit simulator. A priori we do not know the probability function, although the most common is the Gaussian, whose well-known expression is

$f_{\mu,\sigma^2}(x)=\dfrac {1}{\sigma \sqrt {2 \pi}}\,e^{-\dfrac {(x-\mu)^2}{2\sigma^2}}$

where μ is the mean and σ² the variance. Running a Monte Carlo analysis in the simulator with 2000 samples, we obtain a histogram of the resistance value, as shown in the next figure

Histogram of the resistor
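What the simulator does here can be mimicked in a few lines. The sketch below assumes the ±5% tolerance corresponds to roughly 3σ of the Gaussian, which is a modeling choice, not a manufacturer specification:

```python
import random
import statistics

random.seed(42)

NOM, TOL = 1000.0, 0.05        # 1 kOhm nominal, +/-5% tolerance
sigma = NOM * TOL / 3          # treat the tolerance limits as a 3-sigma spread

# 2000 Monte Carlo samples of the resistor value
samples = [random.gauss(NOM, sigma) for _ in range(2000)]

print(round(statistics.mean(samples)))     # close to the 1 kOhm nominal
print(round(statistics.stdev(samples), 1)) # close to sigma, about 16.7 Ohm
```

Binning `samples` would reproduce the histogram of the figure; the sample mean and deviation recover the distribution parameters.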

The Monte Carlo algorithm introduces a variable that follows a Gaussian distribution, but the values it takes are random. If the 2000 samples were taken in five different processes of 400 samples each, we would still find a Gaussian tendency, but the distributions would differ

Gaussian distributions with different processes

Therefore, by working properly with the random variables, we can fully study the feasibility of any design and the sensitivity of each variable. In the next example we analyze the bias point of a bipolar transistor whose β varies between 100 and 600 with an average value of 350 (β is assumed Gaussian), biased with resistors of ±5% tolerance, studying the collector current variation over 100 samples.

STATISTICAL ANALYSIS OF A BJT BEHAVIOR IN DC

Now we study the behavior of a bias circuit with a bipolar transistor, shown in the next figure

Bias point circuit of a BJT

where the resistors have a ±5% tolerance and the transistor has a β variation between 100 and 600, with a nominal value of 350. Its bias point is Ic=1.8 mA, Vce=3.2 V. A Monte Carlo analysis with 100 samples gives the following result

BJT current distribution respect to the random variables
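A distribution of this kind can be reproduced outside the simulator. The sketch below assumes a simple fixed-bias topology with hypothetical values (VCC, VBE and a base resistor chosen so the nominal collector current lands near 1.8 mA); it is not the schematic of the figure:

```python
import random
import statistics

random.seed(3)

VCC, VBE = 10.0, 0.7     # hypothetical supply voltage and junction drop
RB_NOM = 1.8e6           # base resistor picked so Ic is about 1.8 mA at beta = 350

def ic_sample():
    rb = RB_NOM * random.uniform(0.95, 1.05)     # +/-5% resistor tolerance
    beta = random.gauss(350, (600 - 100) / 6)    # 100..600 taken as a ~3-sigma range
    return beta * (VCC - VBE) / rb               # fixed-bias collector current

ics = [ic_sample() for _ in range(100)]
print(round(statistics.mean(ics) * 1e3, 2), "mA")   # scatters around 1.8 mA
```

The wide spread of β dominates: the relative deviation of Ic is far larger than the ±5% of the resistor alone, in line with the ±28% tolerance discussed below.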

From the shape of the graph, we can check that the result converges to a Gaussian distribution with average value Ic=1.8 mA and a tolerance of ±28%. Suppose now that we run the same sweep in several batches of 100 samples each. The result is

BJT current distribution respect several batches

where each batch converges to a Gaussian distribution, in this case with average value μ=1.8 mA and variance σ²=7%. Thus, each process can be analyzed not only globally but also batch by batch. Suppose now that β is a random variable with a uniform distribution between 100 and 600. Analyzing just 100 samples, we obtain the next graph

Results with a BETA uniform distribution

and it can be seen that the current converges to a uniform distribution, widening the current tolerance range and increasing the probability at the ends. Therefore, we can also study the circuit's behavior when each variable has a different distribution function.
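The effect of swapping the β distribution can be sketched as follows, again assuming a hypothetical fixed-bias cell (Vcc = 5 V, Rb = 836 kΩ, Vbe = 0.7 V, values not taken from the post) and comparing the Gaussian and uniform cases on the same resistor samples:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100

# Hypothetical fixed-bias cell (assumed values).
VCC, VBE = 5.0, 0.7
RB_NOM = 836e3

# Same cell, two characterizations of beta: Gaussian around 350
# (100-600 taken as +/-3 sigma) versus uniform over [100, 600].
beta_gauss = np.clip(rng.normal(350, (600 - 100) / 6, N), 100, 600)
beta_unif = rng.uniform(100, 600, N)

# One shared set of +/-5% resistor samples, so only beta differs.
rb = RB_NOM * rng.uniform(0.95, 1.05, N)

ic_gauss = beta_gauss * (VCC - VBE) / rb
ic_unif = beta_unif * (VCC - VBE) / rb

# A uniform beta spreads Ic over a wider band, with more weight at the ends.
for name, ic in (("gaussian", ic_gauss), ("uniform", ic_unif)):
    print(f"{name:>8}: Ic in [{ic.min()*1e3:.2f}, {ic.max()*1e3:.2f}] mA, "
          f"std = {ic.std()*1e3:.2f} mA")
```

The wider spread of `ic_unif` mirrors the widened tolerance range seen in the figure.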

Having seen that the Monte Carlo method lets us analyze the behavior of any complex circuit in terms of tolerances, it can equally help us study how to correct those results. In the next posts we shall analyze this method in more depth, exploring its potential and what can be achieved with it.

CORRECTING THE TOLERANCES

In the simulated circuit, characterizing the transistor's β as a uniform random variable increased the probability of unwanted current values (at the ends of the range). This is one of the most problematic features, not only of bipolar transistors but also of field-effect transistors: the variation of their current gain. This simple example shows what happens when we use a typical correction circuit for the β variation, such as the classic biasing by emitter resistance.

Bias circuit by emitter resistance

Using this circuit and analyzing it with Monte Carlo, we can compare its results with those of the previous case, now using 1000 samples. The result is

Results with both circuits

where we can check that the probability has concentrated around 2 mA, reducing the probability density at low current values and narrowing the distribution function. Therefore, the Monte Carlo method not only enables us to analyze the behavior of a circuit subjected to statistical variations, but also allows us to optimize the circuit and adjust it to the desired limit values. Used properly, it is a powerful calculation tool that improves our knowledge of our circuits.
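The narrowing effect of emitter degeneration can be sketched with assumed component values (Vcc = 10 V, R1 = 56 kΩ, R2 = 12 kΩ, Re = 560 Ω for the corrected cell, Rb = 1.6 MΩ for a fixed-bias cell; none of these come from the original schematics), comparing both topologies under the same uniform β:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000  # number of samples, as in the text

VCC, VBE = 10.0, 0.7

def tol(nom, pct, n):
    """Sample a resistor uniformly inside its tolerance band."""
    return nom * rng.uniform(1 - pct, 1 + pct, n)

beta = rng.uniform(100, 600, N)  # worst case: uniform current gain

# (a) Hypothetical fixed-bias cell: Ic is directly proportional to beta.
rb = tol(1.6e6, 0.05, N)  # chosen so Ic is near 2 mA at beta = 350
ic_fixed = beta * (VCC - VBE) / rb

# (b) Emitter-resistance bias (assumed values): base divider + Re.
r1, r2, re = tol(56e3, 0.05, N), tol(12e3, 0.05, N), tol(560.0, 0.05, N)
vth = VCC * r2 / (r1 + r2)   # Thevenin voltage of the divider
rth = r1 * r2 / (r1 + r2)    # Thevenin resistance of the divider
ic_emitter = (vth - VBE) / (re + rth / (beta + 1))

# Emitter degeneration narrows the spread: Ic now depends on beta only
# through the small Rth/(beta + 1) term in the denominator.
for name, ic in (("fixed bias", ic_fixed), ("emitter R", ic_emitter)):
    print(f"{name:>10}: mean = {ic.mean()*1e3:.2f} mA, "
          f"rel. spread = +/-{ic.std()/ic.mean()*100:.0f}%")
```

The relative spread of `ic_emitter` is far smaller than that of `ic_fixed`, which is exactly the narrowing of the distribution function described above.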

CONCLUSIONS

With this first post we begin a series dedicated to the Monte Carlo method, presenting the method and its usefulness. As the examples show, the Monte Carlo method provides very useful data about the limitations and variations of the circuit we are analyzing, provided we know how its variables are characterized. It also allows us to improve the circuit through statistical studies, in addition to setting the standards for verification in any production process.

In the next posts we shall go deeper into the method through the study of a specific circuit from one of my most recent projects, analyzing the expected results and the different simulations that can be performed with Monte Carlo, such as worst-case analysis, sensitivity analysis, and post-production optimization.

REFERENCES

1. Castillo Ron, Enrique, “Introducción a la Estadística Aplicada”, Santander, NORAY, 1978, ISBN 84-300-0021-6.
2. Peña Sánchez de Rivera, Daniel, “Fundamentos de Estadística”, Madrid, Alianza Editorial, 2001, ISBN 84-206-8696-4.
3. Kroese, Dirk P., et al., “Why the Monte Carlo method is so important today”, WIREs Comp Stat, Vol. 6, pp. 386-392, 2014, DOI: 10.1002/wics.1314.