
Simulation of Physical Systems

I have spent a lot of time writing posts about simulation. The main reason is that, over many years, I have learned the value of using computers to analyze physical systems. Without these tools I would never be able to obtain reliable results, given the sheer amount of calculation involved. Modern simulators, which exploit the computing power now available to solve complex calculations, allow us to predict the behavior of a complex system quite realistically once its structure is known. Physics and Engineering rely on simulation every day to improve predictions and support decisions. In this post, I am going to review the most important points we should keep in mind about simulation.

In 1982, the physicist Richard Feynman published an article about analyzing physical systems with computers (1). By then, computer technology had progressed far enough to provide substantial calculation capacity. Programming languages suited to complex formulas, such as FORTRAN, made it possible to analyze systems described by integro-differential equations, whose resolution usually requires numerical methods. So, in those early years, physicists began to run simulations with programs able to solve the constitutive equations of a system, although not always from simple descriptions.

A great step forward in electronics was the SPICE program, at the beginning of the 1970s (2). This FORTRAN-based program was able to analyze non-linear electronic circuits, excluding radiation effects, by solving their integro-differential equations in the time domain. Over the years, Berkeley's SPICE became the main reference among simulation programs, and its success has been such that almost every circuit simulator developed since then is based on the algorithms that Nagel and Pederson devised in the 1970s.

From the 1980s, and in the search for solutions to three-dimensional problems, the method of moments (MoM) was developed. It solves systems formulated as integral equations on the boundaries (3) and became very popular, being used in Fluid Mechanics, Acoustics and Electromagnetism. Today it is still used to solve two-dimensional electromagnetic structures.

But the algorithms made huge progress in the 1990s with the emergence of the finite element method (FEM, frequency domain) and the finite-difference time-domain method (FDTD, time domain), both based on solving systems formulated as differential equations and both important benchmarks in the generation of new algorithms able to handle complex systems (4). With these advances, the contribution of simulation to Physics reached spectacular dimensions.

WHAT IS THE VALUE OF AN ACCURATE MODEL?

When we study any physical phenomenon, we usually invoke a model. Whether the phenomenon is isolated or embedded in an environment, and whether it belongs to Acoustics, Electromagnetism or Quantum Mechanics, having a well-characterized model is essential to obtain its behavior in terms of its variables. Using an accurate model increases our confidence in the results.

However, modeling is complex. We need to know the relationships between the variables and, from there, derive a system of equations that defines the behavior inside a computer.

An example of a model is a piezoelectric material. In Electronics, piezoelectric materials are commonly used as resonators, and it is common to find such devices (quartz or any other resonant material based on this property) in electronic equipment.

A very successful piezoelectric model was developed by Mason in the 1940s (5). Exploiting the similarity between electromagnetic and acoustic waves, he combined both domains using transmission lines based on the telegrapher's equations and wrote the constitutive equations accordingly. In this way he obtained a piezoelectric model that is still used today. The model is shown in Fig. 1 and has already been studied in previous posts.

Fig. 1 – Mason piezoelectric model

This model practically solves the small-signal analysis in the frequency domain, yielding an impedance resonance trace like the one shown in Fig. 2.

Fig. 2 – Results of the Mason model analysis

However, models need to expand their predictive capacity.

The Mason model describes the piezoelectric behavior correctly as long as we work in the linear regime, but it falls short when we need to know the large-signal behavior. New advances in the study of piezoelectric materials therefore included the non-linear relationships in the constitutive equations (6).

Fig. 3 – Three-dimensional model of an inductor

In three-dimensional models, we must know well the characteristics that define the materials in order to obtain optimal results. In the inductor shown in Fig. 3, CoFeHfO is used as the magnetic material. It has a frequency-dependent complex magnetic permeability that must be defined in the material libraries.

The better the model is defined, the better the results will be, and this is the fundamental task of the physicist: obtaining a reliable model from the study of the phenomena and the materials.

A model is usually extracted by direct measurement or through derived magnitudes, using systems of equations. With a correct model definition, the simulation results become more reliable.

ANALYSIS USING SIMULATION

Once the model is correctly defined, we can perform an analysis by simulation. In this case, we will study the H-field inside the inductor at 200 MHz using FEM analysis, and plot it as shown in Fig. 4.

Fig. 4 – Magnetic field strength inside the inductor

The result is drawn as a vector plot, chosen so that the direction of the H-field inside the inductor can be seen. We can verify, first, that the maximum H-field occurs inside the inductor, oriented towards the positive Y axis in the upper area, while in the lower part the orientation is reversed. The maximum H-field obtained is 2330 A/m with 1 W of excitation between the inductor electrodes.

The behavior is precisely that of an inductor, whose value can also be estimated by calculating its impedance and drawing it on the Smith chart, Fig. 5.

Fig. 5 – Inductor impedance on the Smith chart

The Smith chart trace clearly shows an inductive impedance whose value decreases as the frequency increases, because of the losses of the CoFeHfO magnetic material. Moreover, these losses make the resistance increase with frequency, so there will be a maximum Q somewhere in the useful band.

Fig. 6 – Inductor quality factor

Since a lossy inductor has a quality factor Q, we can plot it as a function of frequency, as in Fig. 6.
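As a quick illustration of this post-processing step, the sketch below shows how the effective resistance, apparent inductance and Q can be derived from a simulated complex impedance Z(f). The impedance samples are made up for the example, not the actual CoFeHfO simulation data.

```python
import numpy as np

# Hypothetical impedance samples Z(f) of a lossy inductor (illustration only)
freq = np.array([50e6, 100e6, 200e6, 400e6])                  # Hz
Z = np.array([1.2 + 31j, 2.9 + 60j, 8.5 + 115j, 40 + 190j])   # ohms

R = Z.real                        # effective series resistance
L = Z.imag / (2 * np.pi * freq)   # apparent inductance, H
Q = Z.imag / Z.real               # quality factor

for f, r, l, q in zip(freq, R, L, Q):
    print(f"{f/1e6:6.0f} MHz: R = {r:6.2f} ohm, L = {l*1e9:5.1f} nH, Q = {q:5.1f}")
```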

Therefore, with the FEM simulation we have been able to obtain the physical parameters of a modeled structure that would have cost us much more time and effort to get by means of lengthy calculations. This shows, as Feynman pointed out in that 1982 article, the power of simulation when accurate models and proper software are available to perform these analyses.

However, simulation does not always deliver the best results by itself. It is precisely the previous step, having an accurate model that faithfully defines the physical behavior of the structure, that ensures the reliability of the results.

EXPERIMENTAL RESULTS

The best way to check whether a simulation is valid is to compare it against experimental results. Fortunately, the simulation performed on the previous inductor is taken from (7), and in that reference the authors show experimental results that validate the inductor model. In Fig. 7 and 8 we can see the inductance and resistance values, which, together with the quality factor, can be compared with the authors' experimental results.

Fig. 7 – Inductance as a function of frequency

Fig. 8 – Effective resistance as a function of frequency

The results obtained by the authors, using HFSS to simulate the inductor, can be seen in Fig. 9. The authors simulated the structure with and without the magnetic core and plotted the simulation against the experimental results. Looking at the graphs, it can be concluded that the simulation results agree very well with the experimental measurements.

This shows that simulation is effective when the model is reliable, and that a model is accurate when the simulation results converge with the experimental ones. We thus have a powerful analysis tool that allows us to know the behavior of a structure in advance and make decisions before moving on to prototyping.

Fig. 9 – Experimental results

In any case, convergence is also important in a simulation. FEM simulation requires a mesh fine enough to achieve good convergence. A poor convergence level gives results far from the optimum, and very complex structures demand a lot of processing power, a large amount of RAM and, sometimes, simulation spread over several processors. For more complex structures the simulation time increases considerably, and that is one of the main disadvantages of the method.

Although FEM simulators now allow the optimization of parameter values and even integration with other simulators, they still require, due to the complexity of the calculations involved, powerful computers in order to produce reliable results.

CONCLUSIONS

Once again, we agree with Feynman when, in that 1982 article, he chose a topic which, at the time, seemed of little interest to his audience. Since its publication, Feynman's article has become a classic of the Physics literature. The experience I have gained over the years with several simulators tells me that the path they opened will advance considerably when quantum computers become a reality and processing speeds rise, allowing these tools to deliver reliable results in a short time.

Simulation of physical systems has been an important step forward, making it possible to obtain results without building prior prototypes, and it means an important saving in research and development costs.

REFERENCES

  1. Feynman, R.; "Simulating Physics with Computers"; International Journal of Theoretical Physics, 1982, Vol. 21, Issue 6-7, pp. 467-488, DOI: 10.1007/BF02650179.
  2. Nagel, Laurence W. and Pederson, D. O.; "SPICE (Simulation Program with Integrated Circuit Emphasis)"; EECS Department, University of California, Berkeley, 1973, UCB/ERL M382.
  3. Gibson, Walton C.; "The Method of Moments in Electromagnetics"; 2nd Edition, CRC Press, 2014, ISBN: 978-1-4822-3579-1.
  4. Reddy, J. N.; "An Introduction to the Finite Element Method"; 2nd Edition, McGraw-Hill, 1993, ISBN: 0-07-051355-4.
  5. Mason, Warren P.; "Electromechanical Transducers and Wave Filters"; 2nd Edition, Van Nostrand Reinhold Inc., 1942, ISBN: 978-0-4420-5164-8.
  6. Shim, Dong S. and Feld, David A.; "A General Nonlinear Mason Model of Arbitrary Nonlinearities in a Piezoelectric Film"; IEEE International Ultrasonics Symposium Proceedings, 2010, pp. 295-300.
  7. Li, LiangLiang, et al.; "Small-Resistance and High-Quality-Factor Magnetic Integrated Inductors on PCB"; IEEE Transactions on Advanced Packaging, Vol. 32, No. 4, pp. 780-787, November 2009, DOI: 10.1109/TADVP.2009.2019845.

Studying slotline transmission lines

PCB transmission lines are an optimal, low-cost solution for guided propagation at very high frequencies. The most popular are the microstrip and the coplanar waveguide. These lines are easily manufactured on a printed circuit board and their impedance can be calculated from their dimensions. They propagate quasi-TEM (transverse electromagnetic) modes, with no field component in the direction of propagation. However, there is another very popular line that can also be used at high frequencies: the slotline. In this post, we are going to study the electrical behavior of slotlines and some microwave circuits that can be built with them.

At high frequencies, lines usually behave as distributed transmission lines. Therefore, it is necessary to know their impedance so that the signal does not suffer mismatch losses during propagation.

Microstrip and coplanar waveguides are very popular since they are easily implemented on a printed circuit board, they are cheap and they can be calculated easily. In both lines the propagation mode is quasi-TEM, with no field components in the direction of propagation, and their characteristic impedance Zc and wavelength λg depend on the line dimensions and on the dielectric substrate that supports them.

There is another type of line that is usually used at very high frequencies: the slotline. This line is a slot in the copper plane through which a transverse electric mode propagates (specifically the TE01 mode, as shown in the following figure).

Fig. 1 –  TE01 mode on a slotline

The field is confined near the slot so that propagation has the minimum possible losses and, as with microstrip lines, there is a discontinuity between the dielectric substrate and air. The slotline is used as a transmission line on substrates with a high dielectric constant (around εr≥9.2), in order to confine the fields as close as possible to the slot, although it can also be used for couplings on substrates with lower dielectric constants. In this way, planar antennas can be fed with slotlines.

In this post, we will focus on its use as a transmission line (with high dielectric constants) and on the microwave circuits we can make with it, studying the transitions between both technologies (slotline to microstrip).

ANALYZING THE SLOTLINE TRANSMISSION LINE

Being a transmission line like the others, the slotline has a characteristic impedance Zc and a wavelength λs. In addition, in the TE01 propagation mode the electric field component which propagates, in cylindrical coordinates, is Eφ, as shown in the next figure.

Fig. 2 – Eφ component

This component is calculated from the magnetic components Hr and Hz, taking the Z axis as the propagation direction, perpendicular to the electric field. From here, we get an expression for the propagation constant kc, which is

Fig. 3 – Eφ and kc expressions

where λ0 is the free-space wavelength of the propagated field. The first thing deduced from the expression of kc is that there is a cutoff wavelength λs from which the field propagates as the TE01 mode, since λ0≤λs is required for kc to be real and for propagation to exist. This means that there will be a cutoff thickness for the substrate, which depends on the dielectric constant εr. The expression for that cutoff thickness, where there is no propagation in the TE01 mode, is

Fig. 4 – Substrate cutoff thickness

With these expressions, Gupta (see [1], page 283) derived closed-form expressions for the line impedance Zc and the line wavelength λs, which allow us to characterize the transmission line and build microwave circuits with slotlines.

ANALYZING A SLOTLINE

Like microstrip and coplanar waveguides, the slotline can be analyzed with an FEM electromagnetic simulator. We are going to study a transmission line on an RT/Duroid 6010 substrate, with dielectric constant εr=10.8 and 0.5 mm thickness. The slot width is 5 mil. According to the impedance calculations, Zc is 68.4 Ω and λs is 14.6 mm at 10 GHz. A 3D view of the slotline is shown below.

Fig. 5 – Slotline 3D view

The next graph shows the S parameters with 50 Ω generator and load impedances.

Fig. 6 – Slotline S parameters

On the Smith chart

Fig. 7 – Slotline impedance on Smith Chart

where the impedance is 36.8 − j·24.4 Ω at 10 GHz.
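As a side note, if we assume the trace in Fig. 7 is the input reflection coefficient referenced to 50 Ω, the impedance read on the Smith chart and the reflection coefficient are related by the standard one-port conversion. The sketch below back-calculates S11 from the impedance quoted above, just to show the conversion in both directions (it is not an export from the simulator).

```python
import numpy as np

Z0 = 50.0            # reference impedance, ohms
Z = 36.8 - 24.4j     # impedance read on the Smith chart at 10 GHz

# Impedance -> reflection coefficient (what the Smith chart actually plots)
gamma = (Z - Z0) / (Z + Z0)
print(f"|S11| = {abs(gamma):.3f}, angle = {np.degrees(np.angle(gamma)):.1f} deg")

# And back again, reflection coefficient -> impedance
Z_back = Z0 * (1 + gamma) / (1 - gamma)
print(f"Z = {Z_back.real:.1f} {Z_back.imag:+.1f}j ohm")
```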

It is also possible to display the surface current propagated along the line in a 3D view.

Fig. 8 – Slot surface current, in A/m

where it can be seen that the surface current is confined as close as possible to the slot. From this current the H-field can be derived and, from it, the E-field, which only has a transverse component. Two maxima can also be seen in the current magnitude, which shows that the slot length is λs.

FEM simulation thus allows us to analyze slotlines and build microwave circuits with them, using the characterization shown in [1].

SLOTLINE-TO-MICROSTRIP TRANSITIONS

Since the slotline is a slot made in a copper plane, transitions can be made from slotline to microstrip. A typical transition is shown below.

Fig. 9 – Slotline-to-microstrip transitions

The microstrip line ends in a λm/4 open-circuit stub, so the current is minimum at the open circuit and maximum at the transition location. In the same way, the slotline ends in a λs/4 short-circuited stub, with minimum surface current at the transition location. The equivalent circuit for each transition is then

Fig. 10 – Equivalent circuit for a slotline-to-microstrip transition

Using the FEM simulator it is possible to study how a transition behaves. The next graph shows its S parameters. The transition has been made on RT/Duroid 6010, with 70 mil thickness and 25 mil slot width. The microstrip width is 50 mil and the working band is 0.7–2.7 GHz.

Fig. 11 – Transition S parameters

and, showing the surface current on the transition, we get

Fig. 12 – Current on the transition.

where the coupling of the current and its distribution along the slotline can be seen.

OTHER MICROWAVE CIRCUITS BASED ON SLOTLINES

The slotline is a versatile line. Combined with microstrip (the microstrip ground plane can include slots), it allows us to make a series of interesting circuits, such as those shown in Fig. 13.

Fig. 13 – Microwave circuits with slotline and microstrip.

The circuit in Fig. 13(a) is a balun in slotline and microstrip technology, where the microstrip is shorted to ground at the transition. The balanced part is the slotline section, since both ground planes work as differential ports, while the unbalanced part is the microstrip, referenced to the ground plane where the slots are placed. With this circuit it is possible to build frequency mixers or balanced mixers. Another interesting circuit is shown in Fig. 13(b): a rat-race in which the microstrip ring is not closed but is coupled through a slot. Fig. 13(c) shows a branch-line coupler using a slotline and, finally, Fig. 13(d) shows a De Ronde coupler. This last circuit is ideal for equalizing the odd/even mode phase velocities.

CONCLUSIONS

In this post, we have analyzed the slotline used as a microwave transmission line and compared it with other technologies. Besides, we have made a small behavioral analysis using an FEM simulator, checking the possibilities of line analysis (S parameters and surface current), and we have shown some circuits that can be made with this technology, verifying the versatility of this transmission line.

REFERENCES

  1. Gupta, K. C., et al.; "Microstrip Lines and Slotlines"; 2nd Edition, Artech House, Inc., 1996, ISBN 0-89006-766-X.

Simulating transitions with waveguides

Waveguides are transmission lines widely used in very high frequency applications as guided propagation devices. Their main advantages are low propagation losses, thanks to the use of a single conductor and air instead of a dielectric as in coaxial cable, a greater power-handling capacity and simple construction. Their main drawbacks are that they are bulky, that they cannot operate below their cutoff frequency, and that the transitions from the guide to other technologies (such as coaxial or microstrip) often have losses. However, finite element method (FEM) simulation allows us to study and optimize the transitions that can be built with these devices, getting very good results. In this post we will study waveguides using an FEM simulator such as HFSS, which is able to analyze three-dimensional electromagnetic fields (3D simulation).

Waveguides are very popular in very high frequency circuits, due to the ease of their construction and their low losses. The propagated fields, unlike those in coaxial lines, are transverse electric or transverse magnetic (TE or TM fields), so they have a magnetic field component (TE) or an electric field component (TM) in the propagation direction. These fields are the solutions of the Helmholtz equation under certain boundary conditions

Fig. 1 – Helmholtz equation for both modes

and, solving these differential equations by separation of variables and applying the boundary conditions of a rectangular enclosure, where all the walls are electric walls (conductors, on which the tangential component of the electric field vanishes),

Fig. 2 – Boundary conditions on a rectangular waveguide

we can obtain a set of solutions for the electromagnetic field inside the guide, starting from the solutions of the expressions shown in Fig. 1.

Fig. 3 – Table of electromagnetic fields and parameters in rectangular waveguides

Therefore, electromagnetic fields propagate as modes, called TEmn for the transverse electric case (Ez=0) or TMmn for the transverse magnetic case (Hz=0). From the propagation constant kc we get an expression for the cutoff frequency fc, the lowest frequency at which fields propagate inside the waveguide:

Fig. 4 – Cutoff frequency for a rectangular waveguide

The fundamental mode corresponds to m=1, n=0, since, although the expression has extrema for m,n=0, the TE00 and TM00 modes do not exist. And since a>b, the lowest cutoff frequency of the waveguide corresponds to the TE10 mode. That is the mode we are going to analyze using 3D FEM simulation.

SIMULATION OF A RECTANGULAR WAVEGUIDE BY THE FINITE ELEMENTS METHOD

In a 3D simulator it is very easy to model a rectangular waveguide, since it is enough to draw a rectangular prism with the appropriate dimensions a and b. In this case, a=3.10 mm and b=1.55 mm. The TE10 mode starts to propagate at 48 GHz and the next mode, TE01, at 97 GHz, so the waveguide is analyzed at 76 GHz, the frequency at which it will work. Drawn in HFSS, the waveguide looks like this:

Fig. 5 – Rectangular waveguide. HFSS model

The inner rectangular prism is assigned to vacuum, and the side faces are assigned perfect E boundaries. Two wave ports are assigned on the rectangles at −z/2 and +z/2, using the first propagation mode. The next figure shows the E-field along the waveguide.

Fig. 6 – Electric field inside the waveguide

Analyzing the scattering parameters from 40 to 90 GHz, we get

Fig. 7 – S parameters for the rectangular waveguide

where it can be seen that the first mode starts to propagate inside the waveguide at 48.5 GHz.

From 97 GHz the TE01 mode could propagate too, which does not interest us, so the analysis is done at 76 GHz.
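These cutoff frequencies follow directly from the expression in Fig. 4; a quick numerical check for the air-filled guide used here:

```python
from math import sqrt

c = 299792458.0            # speed of light, m/s
a, b = 3.10e-3, 1.55e-3    # waveguide cross-section, m

def f_cutoff(m, n):
    """Cutoff frequency (Hz) of the TEmn/TMmn mode in an air-filled rectangular guide."""
    return (c / 2.0) * sqrt((m / a) ** 2 + (n / b) ** 2)

print(f"TE10 cutoff: {f_cutoff(1, 0) / 1e9:.1f} GHz")   # ~48.4 GHz
print(f"TE01 cutoff: {f_cutoff(0, 1) / 1e9:.1f} GHz")   # ~96.7 GHz
```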

WAVEGUIDE TRANSITIONS

The most common transitions are from waveguide to coaxial, or from waveguide to microstrip line, in order to use the propagated energy in other kinds of applications. For this, a probe is placed in the direction of the E-field, so that the energy couples onto the probe (the E-field of the TE10 mode points along the Y axis).

Fig. 8 – Probe location

The probe is a quarter-wave resonant antenna at the desired frequency. Along the X axis, the maximum E-field occurs at x=a/2, while, to place the maximum along the Z axis, the guide is terminated in a short circuit. The E-field is then null on the guide wall and maximum at a quarter of the guide wavelength, which is

Fig. 9 – Guide wavelength

and in our case, at 76 GHz, λ is 3.95 mm and λg is 5.11 mm. The probe length will then be 0.99 mm and the short-circuit distance 2.56 mm.
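A minimal numerical check of the wavelengths quoted above (assuming an air-filled guide and the TE10 mode):

```python
from math import sqrt

c = 299792458.0          # speed of light, m/s
f = 76e9                 # working frequency, Hz
a = 3.10e-3              # broad wall dimension, m

lambda0 = c / f                                          # free-space wavelength
lambda_g = lambda0 / sqrt(1 - (lambda0 / (2 * a)) ** 2)  # TE10 guide wavelength

print(f"lambda0  = {lambda0 * 1e3:.2f} mm")                       # ~3.95 mm
print(f"lambda_g = {lambda_g * 1e3:.2f} mm")                      # ~5.11 mm
print(f"probe length ~ lambda0/4 = {lambda0 / 4 * 1e3:.2f} mm")   # ~0.99 mm
```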

In coaxial transitions, it is enough to insert a coax whose inner conductor protrudes λ/4 into the guide, at λg/4 from the short circuit. But microstrip transitions use a dielectric as support for the conductor lines, so the dielectric effect must be kept in mind too.

Our transition can be modeled in HFSS by assigning different materials. The probe is built on a Rogers RO3003 substrate, with low dielectric constant and low losses, making the transition to microstrip. The lateral faces and the lines are assigned perfect E boundaries, and the substrate volume is assigned the RO3003 material. The waveguide interior and the transition cavity are assigned to vacuum. On the end face of the transition, a wave port is assigned.

Fig. 10 – Rectangular waveguide to microstrip transition

Now the simulation is run, analyzing the fields and S parameters.

Fig. 11 – E-field on the transition

and it can be seen how the E-field couples to the probe and the signal propagates along the microstrip.

Fig. 12 – Transition S parameters

Looking at the S parameters, we can see that the lowest-loss coupling happens at 76–78 GHz, our working frequency range.

OTHER DEVICES IN WAVEGUIDES: THE MAGIC TEE

Among the usual waveguide devices, one of the most popular is the magic tee, a special combiner which can be used as a divider, a combiner and a signal adder/subtractor.

Fig. 13 – Magic Tee

Its behavior is very simple: when an EM field is fed into port 2, the signal is split in phase between ports 1 and 3. Port 4 is isolated because its E-plane is perpendicular to the port 2 E-plane. But if the EM field is fed into port 4, it is split between ports 1 and 3 in phase opposition (180°), while port 2 is now isolated.
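This behavior can be summarized by the scattering matrix of an ideal (lossless, matched) magic tee. The sketch below assumes the port numbering of this post, with ports 1 and 3 as the collinear arms, port 2 as the in-phase (sum) port and port 4 as the difference port; it is an idealization, not the simulated device.

```python
import numpy as np

# Ideal magic-tee S-matrix (ports 1,3 = collinear arms, 2 = sum port, 4 = difference port)
S = (1 / np.sqrt(2)) * np.array([
    [0,  1,  0,  1],
    [1,  0,  1,  0],
    [0,  1,  0, -1],
    [1,  0, -1,  0],
], dtype=complex)

a2 = np.array([0, 1, 0, 0], dtype=complex)   # excitation at port 2
a4 = np.array([0, 0, 0, 1], dtype=complex)   # excitation at port 4

print(np.round(S @ a2, 3))   # equal, in-phase waves at ports 1 and 3; port 4 isolated
print(np.round(S @ a4, 3))   # equal, out-of-phase waves at ports 1 and 3; port 2 isolated
print(np.allclose(S.conj().T @ S, np.eye(4)))  # lossless: S is unitary
```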

Using FEM simulation to analyze the magic tee and feeding the power through port 2, we get the following response

Fig. 14 – E-field inside the magic tee fed from port 2

and the power is split between ports 1 and 3 while port 4 is isolated. Doing the same from port 4, we get

Fig. 15 – E-field inside the magic tee fed from port 4

where now port 2 is isolated.

To see the phases, a vector plot of the E-field is used,

Fig. 16 – Vector E-field inside the magic tee fed from port 2

where it is seen that the field at ports 1 and 3 has the same direction, so they are in phase. Feeding from port 4,

Fig. 17 – Vector E-field inside the magic tee fed from port 4

in which it is seen that the signals at ports 1 and 3 have the same level but are in phase opposition (180° between them).

FEM simulation allows us to analyze the behavior of the EM field from different points of view, only changing the excitations. For example, feeding signals in phase into ports 2 and 4, both signals add in phase at port 3 and cancel at port 1.

Fig. 18 – E-field inside the magic tee fed from ports 2 and 4 in phase

whereas, if the phase at port 2 or port 4 is inverted, the signals add at port 1 and cancel at port 3,

Fig. 19 – E-field inside the magic tee fed from ports 2 and 4 in phase opposition

and the result is a signal adder/subtractor.

CONCLUSIONS

The object of this post was the analysis of the electrical behavior of waveguides using a 3D FEM simulator. The advantage of these simulators is that they can analyze the EM fields in three-dimensional structures with good precision, the modeling being the most important part of correctly defining the structure to be studied, since a 3D simulator has to mesh the structure, and this meshing, which needs a high number of tetrahedra to achieve good convergence, also tends to need more machine memory and processing capacity.
The structures analyzed here, due to their simplicity, have not required long simulation times or significant processing capacity, but as the models become more complex, the processing capacity needed to achieve good accuracy increases.

In subsequent posts, other methods to reduce the modeling effort in complex structures will be analyzed, through the use of symmetry planes that allow us to divide the structure and reduce the meshing considerably.

REFERENCES

  1. Swanson, Daniel G., Jr. and Hoefer, Wolfgang J. R.; "Microwave Circuit Modeling Using Electromagnetic Field Simulation"; Artech House, 2003, ISBN 1-58053-308-6.
  2. Wade, Paul; "Rectangular Waveguide to Coax Transition Design"; QEX, Nov/Dec 2006.

Using the Three-Dimensional Smith Chart

The Smith chart is a standard tool in RF design. Developed by Phillip Smith in 1939, it has become the most popular graphical method for representing impedances and performing operations with complex numbers. Traditionally, the Smith chart has been used in its 2D polar form, bounded by a unit-radius circle. However, the 2D format has some restrictions when active impedances (oscillators) or stability circles (amplifiers) are represented, since these often leave the polar chart. In recent years the three-dimensional Smith chart has become popular, and advances in 3D rendering software make it easy to use for design. In this post, I will try to show the handling of the three-dimensional Smith chart and its application to a low-noise amplifier design.

When Phillip Smith was working at Bell Labs, he had to match an antenna and looked for a way to solve the design graphically. By means of the mathematical expressions that define the impedances along transmission lines, he managed to represent the complex impedance plane with circles of constant resistance and reactance. These circles made it easier to plot any impedance in a polar space, with perfect matching placed at the center of the chart and the outer circle representing pure reactance. Traditionally, the Smith chart has been represented in polar form as shown below.

Fig. 1 – Traditional Smith’s Chart

The impedance is normalized by taking the ratio between the impedance and the generator impedance. The center of the chart is pure unit resistance (perfect matching), while the peripheral circle that bounds the chart is pure reactance. The left end of the chart represents the short circuit and the right end, the open circuit. The chart became very popular because it allowed matching networks with transmission lines to be calculated by a graphical method. However, design difficulties appear when active impedances are analyzed, when studying amplifier stability or when designing oscillators.

By its construction, the chart is limited to impedances with positive real part, but it can be extended, through the Möbius transformation, to represent impedances with negative real part [1]. This expanded chart, covering the negative-real-part plane, can be seen in the following figure.

Fig. 2- Smith’s Chart expanded to active impedances

However, this chart shows two issues: 1) although it allows all impedances to be represented, there is still a problem with the complex infinity, so it remains limited; and 2) the chart has large dimensions that make it difficult to use in a graphic environment, even a computer-aided one. Yet the extension is needed when amplifier stability circles are analyzed, since in most cases the centers of these circles are located outside the passive-impedance chart.

In a computer graphics environment, the circles are already drawn by the software itself through calculation, so the chart can be limited to the passive region and only the relevant part of a stability circle drawn. But with oscillators we still have the problem of the complex infinity, which can be solved through a representation on a Riemann sphere.

RIEMANN’S SPHERE

The Riemann sphere is a mathematical construction for representing the complete complex plane, including infinity. The entire complex plane is mapped onto a spherical surface by a stereographic projection.

Fig. 3 – Projection of the complex plane on a sphere

In this graphical form, the southern hemisphere represents the origin, the northern hemisphere represents infinity, and the equator represents the circle of unit radius. The distribution of complex values on the sphere can be seen in the following figure.

Fig. 4 – Distribution of complex values in the sphere

In this way, any complex number can be represented on a surface that is easy to handle.
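As an illustration (not taken from the referenced app), the mapping from a complex value to a point on the unit sphere can be written with the usual stereographic projection formula, which reproduces the hemisphere assignments described above.

```python
import numpy as np

def to_riemann_sphere(gamma):
    """Stereographic projection of a complex number onto the unit (Riemann) sphere.

    |gamma| < 1 maps to the southern hemisphere, |gamma| = 1 to the equator,
    and |gamma| -> infinity approaches the north pole, as described above.
    """
    u, v = gamma.real, gamma.imag
    d = 1 + u**2 + v**2
    return np.array([2 * u / d, 2 * v / d, (u**2 + v**2 - 1) / d])

print(to_riemann_sphere(0 + 0j))      # origin    -> south pole [0, 0, -1]
print(to_riemann_sphere(1 + 0j))      # |G| = 1   -> on the equator
print(to_riemann_sphere(1e6 + 0j))    # ~infinity -> close to the north pole [0, 0, 1]
```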

SMITH’S CHART ON A RIEMANN’S SPHERE

Since the Smith chart is a representation in the complex plane, it can be projected in the same way onto a Riemann sphere [2], as shown in the following figure.

Fig. 5 – Projection of the Smith’s Chart on a Riemann’s sphere

In this case, the northern hemisphere holds the impedances with positive resistance (passive impedances); the southern hemisphere, the impedances with negative resistance (active impedances); the eastern hemisphere, the inductive impedances; and the western one, the capacitive impedances. The prime meridian shows the purely resistive impedances.

Thus, any impedance, active or passive, can be plotted at some point of the sphere, which greatly simplifies the drawing. In the same way, we can represent the stability circles of an amplifier without having to expand the chart. For example, if we want to represent the stability circles of a transistor whose S-parameters at 3 GHz are

S11=0.82∠−69.5°   S21=5.66∠113.8°   S12=0.03∠48.8°   S22=0.72∠−37.6°

its representation on the conventional Smith chart is

Fig. 6 – Traditional representation for stability circles

 

while in the three-dimensional chart it is

Fig. 7 – Stability circles on the 3D chart

where both circles can be seen, partly in the northern hemisphere and partly in the southern one. The representation is thus greatly simplified.
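For reference, the centers and radii of these circles can be computed directly from the S-parameters with the standard source/load stability-circle formulas; a minimal sketch using the values quoted above could be:

```python
import numpy as np

def p(mag, ang_deg):
    """Build a complex number from magnitude and angle in degrees."""
    return mag * np.exp(1j * np.deg2rad(ang_deg))

S11, S21 = p(0.82, -69.5), p(5.66, 113.8)
S12, S22 = p(0.03, 48.8),  p(0.72, -37.6)

delta = S11 * S22 - S12 * S21

# Output (load) stability circle in the load reflection-coefficient plane
C_L = np.conj(S22 - delta * np.conj(S11)) / (abs(S22)**2 - abs(delta)**2)
R_L = abs(S12 * S21) / abs(abs(S22)**2 - abs(delta)**2)

# Input (source) stability circle in the source reflection-coefficient plane
C_S = np.conj(S11 - delta * np.conj(S22)) / (abs(S11)**2 - abs(delta)**2)
R_S = abs(S12 * S21) / abs(abs(S11)**2 - abs(delta)**2)

# Rollett stability factor
K = (1 - abs(S11)**2 - abs(S22)**2 + abs(delta)**2) / (2 * abs(S12 * S21))

print(f"load circle:   center {C_L.real:.3f}{C_L.imag:+.3f}j, radius {R_L:.3f}")
print(f"source circle: center {C_S.real:.3f}{C_S.imag:+.3f}j, radius {R_S:.3f}")
print(f"K = {K:.2f}  (K < 1 -> conditionally stable)")
```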

A PRACTICAL APPLICATION: LOW NOISE AMPLIFIER

Let's see a practical application of the 3D chart, matching the previous amplifier for maximum stable gain and minimum noise figure at 3 GHz. Using traditional methods, and knowing that the transistor parameters are

S11=0.82∠−69.5°   S21=5.66∠113.8°   S12=0.03∠48.8°   S22=0.72∠−37.6°

NFmin=0.62 dB   Γopt=0.5∠67.5°   Rn=0.2

The S-parameters are represented on the 3D Smith chart and the stability circles are drawn. For a better representation, 3 frequencies are used, with a 500 MHz bandwidth.

Fig. 8 – S-parameters and stability circles for the transistor (S11, S21, S12, S22, input stability circle, output stability circle)

The S-parameters as well as the stability circles can be seen both on the conventional Smith chart and on the 3D one. On the conventional Smith chart, the stability circles leave the chart.

An amplifier is unconditionally stable when the stability circles are placed in the active-impedance area of the chart, in the southern hemisphere, under two conditions: if the circles lie in the active region and do not surround the passive one, the unstable impedances are located inside the circles; if the circles surround the passive region, the unstable impedances are located outside them.


Fig. 9 – Possible cases for stability circles in the active region

In this case, since part of the circles enters the passive-impedance region, the amplifier is conditionally stable, and the impedances that could destabilize the amplifier are located inside the circles. This is something that cannot yet be seen clearly in the three-dimensional chart; the app does not seem to calculate it, and it would be interesting to include it in later versions, because it would greatly ease the design.

Let's now match the input for minimum noise. For this, a matching network is needed to transform 50 Ω into the reflection coefficient Γopt, whose normalized impedance is Zopt=0.86+j·1.07. In the app, we open the design window and enter this impedance.

Fig. 10 – Representation of Γopt

Using now the admittance, we move along the circle of constant conductance until the real part of the impedance is 1. This is done by estimation, and a susceptance of 0.5 is obtained. The susceptance must therefore be increased by 0.5 − (−0.57) = 1.07, which corresponds to a shunt capacitor of 1.14 pF.

Fig. 11 – Translating to circle with real part 1.

Now we only need a component that cancels the reactance while keeping the resistance constant. As the reactance is −1.09, the added value should be +1.09, so that the total reactance is zero. This is equivalent to a series inductor of 2.9 nH.

Fig. 12 – Source impedance matched to Γopt
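A quick numerical check of these element values (assuming the 50 Ω reference and the 3 GHz design frequency, and the normalized susceptance/reactance steps quoted above):

```python
import numpy as np

Z0, f = 50.0, 3e9
w = 2 * np.pi * f

# Target reflection coefficient for minimum noise and its normalized impedance
gamma_opt = 0.5 * np.exp(1j * np.deg2rad(67.5))
z_opt = (1 + gamma_opt) / (1 - gamma_opt)
print(f"z_opt = {z_opt.real:.2f}{z_opt.imag:+.2f}j")   # ~0.86 + 1.07j

# Shunt capacitor: normalized susceptance step of 1.07
C = 1.07 / Z0 / w
# Series inductor: normalized reactance step of 1.09
L = 1.09 * Z0 / w

print(f"C = {C * 1e12:.2f} pF")    # ~1.14 pF
print(f"L = {L * 1e9:.2f} nH")     # ~2.9 nH
```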

Once the input matching network for the lowest noise figure has been calculated, we recalculate the S-parameters. Being an active device, the matching network transforms the S-parameters, which are now:

S11=0.54∠−177°   S21=8.3∠61.1°   S12=0.04∠−3.9°   S22=0.72∠−48.6°

and these are represented on the Smith chart to obtain the stability circles.

Fig. 13 – Transistor with matching network to Γopt and stability circles.

The unstable regions are the interiors of the circles, so the amplifier remains stable.

Now the output matching network is obtained for maximum stable gain: the output reflection coefficient S22=0.72∠−48.6° should be loaded with ΓL (the conjugate of S22), transforming 50 Ω into ΓL=0.72∠48.6°. This operation is performed in the same way as for the input matching network. Once the complete matching is done, the S-parameters are recalculated with both input and output matching networks. These are

S11=0.83∠145°   S21=12∠−7.5°   S12=0.06∠−72.5°   S22=0.005∠162°

The gain is 20·log(S21)=21.6 dB, and the noise figure is 0.62 dB (NFmin). These parameters are now represented on the three-dimensional chart to obtain the stability circles.

Fig. 14 – Low noise amplifier and stability circles

In this case, the stable region for the input stability circle is its interior, and for the output stability circle it is the exterior. Since both reflection coefficients, S11 and S22, fall inside the stable regions, the amplifier is stable.

CONCLUSIONS

In this post I had my first contact with the three-dimensional Smith chart. The aim was to study its potential with respect to the traditional chart in microwave engineering. Its main advantage is that, by mapping the Möbius-transformed plane onto a Riemann sphere, it can represent the infinite values, providing a three-dimensional graphical tool on which practically all passive and active impedances can be drawn, as well as parameters that are difficult to represent on the traditional chart, such as stability circles.

In its version 1, the app, which can be found on the website 3D Smith Chart / A New Vision in Microwave Analysis and Design, offers several design options and configurations, although some applications should undoubtedly be added in future versions. In this case, one of the most useful additions, having studied the stability circles of an amplifier, would be the graphical identification of the stability regions. Although this can be solved by calculation, a visual image is always more convenient.

The app has a user manual with examples explained in a simple way, so that the designer becomes familiar with it quickly. In my professional opinion, it is an ideal tool for those of us who are used to using the Smith chart to calculate our matching networks.

REFERENCES

  1. Müller, Andrei; Dascalu, Dan C; Soto, Pablo; Boria, Vicente E.; ” The 3D Smith Chart and Its Practical Applications”; Microwave Journal, vol. 5, no. 7, pp. 64–74, Jul. 2012
  2. Zelley, Chris; “A spherical representation of the Smith Chart”; IEEE Microwave, vol. 8, pp. 60–66, July 2007
  3. Grebennikov, Andrei; Kumar, Narendra; Yarman, Binboga S.; “Broadband RF and Microwave Amplifiers”; Boca Raton: CRC Press, 2016; ISBN 978-1-1388-0020-5

Statistical analysis using Monte Carlo method (II)


In the previous post, some simple examples of the Monte Carlo method were shown. In this post it will be analyzed more deeply, performing a statistical analysis of a more complex system, looking at its output variables and studying the results so that they become genuinely useful. The advantage of simulation is that variables can be generated randomly, and correlations between variables can also be set, producing different effects in the analysis of system performance. Thus, any system can not only be analyzed statistically through random generation of variables; that random generation can also be linked to batch analysis, to failures in production and to post-production recovery.

The circuits studied in the previous post were very simple, which made it possible to see the assignment of random variables and their results when these random variables are integrated into a more complex system. With this analysis, it is possible to check the performance and propose corrections that statistically limit the variations in the final system.

In this case, the dispersive effect of tolerances will be studied on one of the circuits where it is hardest to achieve stable characteristics: an electronic filter. A bandpass filter will be designed and tuned to a fixed frequency, with given passband and stopband bandwidths, and several statistical analyses will be run on it to check how its response depends on the device tolerances.

DESIGN OF THE BANDPASS FILTER

A bandpass filter is designed with a 37.5 MHz center frequency, a 7 MHz passband (return losses ≥14 dB) and a 19 MHz stopband bandwidth (stopband attenuation >20 dB). When the filter is calculated, three sections are obtained, and its schematic is

3-section bandpass filter

With the calculated component values, standard values that reproduce the filter transfer function are chosen, and its frequency response is

Bandpass filter frequency response

where it is possible to check that the center frequency is 37.5 MHz, the return losses are better than 14 dB within ±3.5 MHz of the center frequency, and the stopband width is 18.8 MHz, with 8.5 MHz to the left of the center frequency and 10.3 MHz to the right.
Then, once the filter is designed, a first statistical analysis is done, considering that the capacitor tolerance is ±5% and that the inductors are adjustable. In addition, there is no correlation between the random variables, so each one can take a random value independently.

STATISTICAL ANALYSIS OF THE FILTER WITHOUT CORRELATION BETWEEN VARIABLES

As seen in the previous post, when there are random variables there is output dispersion, so limits to consider a filter valid must be defined and, from these limits, its valid frequency response analyzed. For this, yield analysis is used: an analysis based on the Monte Carlo algorithm that checks the performance, or effectiveness, of the design. To perform this analysis, the specification limits for validation must be defined. The chosen specifications are return losses >13.5 dB over 35–40 MHz, with a 2 MHz reduction of the passband width, and attenuation >20 dB at frequencies ≤29 MHz and ≥48 MHz. The statistical analysis gives

Statistical analysis of the filter. Variables without correlation.

and the outcome is poor: only 60% of the possible filters generated by variables with a ±5% tolerance could be considered valid. The rest would not pass quality control, which means that 40% of defective material would have to be returned to production to be reprocessed.
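Conceptually, this type of yield analysis boils down to the loop sketched below. It is a generic illustration, not the simulator's actual algorithm: the nominal values and the pass/fail test are hypothetical placeholders standing in for the real filter simulation and its specifications.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000        # number of Monte Carlo trials
tol = 0.05      # +/-5% component tolerance

# Hypothetical nominal capacitor values of the three-section filter
C1_nom, C2_nom, C3_nom = 15e-12, 33e-12, 15e-12

passed = 0
for _ in range(N):
    # Each component takes an independent random value within its tolerance
    C1 = C1_nom * (1 + rng.uniform(-tol, tol))
    C2 = C2_nom * (1 + rng.uniform(-tol, tol))
    C3 = C3_nom * (1 + rng.uniform(-tol, tol))

    # Placeholder for the real test: simulate the filter and check the specs
    # (return loss > 13.5 dB in 35-40 MHz, attenuation > 20 dB out of band)
    meets_spec = abs(C1 - C1_nom) / C1_nom + abs(C3 - C3_nom) / C3_nom < 0.06
    passed += meets_spec

print(f"estimated yield: {100 * passed / N:.1f} %")
```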

It can be seen in the graph that the return losses are primarily responsible for this poor performance. What can be done to improve it? In this case, there are 4 random variables. However, two capacitors have the same value (15 pF) and, when they are assembled in a production process, they usually belong to the same manufacturing batch. If these variables have no correlation, they can take completely different values. When they are not correlated, the following chart is obtained

C1, C3 without correlation

However, when these assembled components belong to the same manufacturing batch, their tolerances always vary in the same direction, so there is correlation between these variables.
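One simple way to mimic this in a Monte Carlo loop is to give both capacitors a shared, per-batch deviation plus a small individual spread. The sketch below is only an illustration of that idea (with assumed spread values), not the correlation model used by the simulator.

```python
import numpy as np

rng = np.random.default_rng(1)
n_batches, parts_per_batch = 50, 20
C_nom = 15e-12     # the two equal capacitors of the filter, 15 pF

samples = []
for _ in range(n_batches):
    batch_shift = rng.uniform(-0.04, 0.04)          # common deviation of the batch
    for _ in range(parts_per_batch):
        spread = rng.uniform(-0.01, 0.01, size=2)   # small individual spread
        C1, C3 = C_nom * (1 + batch_shift + spread)
        samples.append((C1, C3))

C1_all, C3_all = np.array(samples).T
print(f"correlation(C1, C3) = {np.corrcoef(C1_all, C3_all)[0, 1]:.2f}")
```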

STATISTICAL ANALYSIS OF THE FILTER WITH CORRELATION BETWEEN VARIABLES

When correlation is used, the influence of tolerances decreases. It is no longer a totally random process, but one of manufacturing batches within which the variations happen. Here, a correlation is set between the variables C1 and C3, which have the same nominal value and belong to the same manufacturing batch, so the correlation graph is now

C1, C3 with correlation

where the variation trend within each batch is the same. Setting a correlation between the two variables then allows us to study the effective performance of the filter, giving

Statistical analysis with correlation between C1 and C3

which seems even worse. But what is really happening? It must be taken into account that the variable correlation has allowed complete batches to be analyzed, whereas in the previous analysis the batches could not be distinguished. The result shows that, out of 50 complete manufacturing processes, 26 would be successful.

However, 24 complete processes would have to be returned to production with their whole lots, which is still a bad result. But there is a solution: post-production adjustment.

STATISTICAL ANALYSIS WITH POST-PRODUCTION ADJUSTMENT

As was said, at this point the response looks very poor, but remember that the inductors were set as adjustable. What happens now? Running a new analysis, allowing these variables to take values within ±10% of their nominal value and enabling post-production optimization in the Monte Carlo analysis, and voilà: even with a very high defect rate, it is possible to recover 96% of the filters within the valid limits.

Statistical analysis with post-production optimization

So an improvement is obtained, because the analysis shows that it is possible to recover almost all of the batches with post-production adjustment; the analysis therefore reveals not only the defect rate but also the recovery possibilities.
It is also possible to represent the variations of the inductors (in this case those of the series resonances) in order to analyze the circuit's sensitivity to the most critical changes. This analysis allows an adjustment pattern to be set, reducing the adjustment time the filter will need.

Analysis of the adjustment patterns of the series resonance inductors

So, with this analysis, done at design time, it is possible to take decisions that set the manufacturing patterns of the product and the adjustment patterns for post-production, knowing in advance the statistical response of the designed filter. This analysis is a very important resource before validating any design.

CONCLUSIONS

In this post, a further step in the possibilities of Monte Carlo statistical analysis has been shown. The algorithm provides optimal results and allows conditions to be set for various analyses, optimizing the design further. By including a post-production adjustment, it is possible to estimate the recovery rate of the proposed design. In the next post, another example of the Monte Carlo method will be presented, showing more possibilities of the algorithm.

REFERENCES

  1. Castillo Ron, Enrique; "Introducción a la Estadística Aplicada"; Santander, NORAY, 1978, ISBN 84-300-0021-6.
  2. Peña Sánchez de Rivera, Daniel; "Fundamentos de Estadística"; Madrid, Alianza Editorial, 2001, ISBN 84-206-8696-4.
  3. Kroese, Dirk P., et al.; "Why the Monte Carlo method is so important today"; WIREs Comp Stat, 2014, Vol. 6, pp. 386-392, DOI: 10.1002/wics.1314.

Statistical analysis using Monte Carlo method (I)

When any electronic device is designed, we can use several deterministic methods to calculate its main parameters, obtaining the values we would measure physically in the device or system. These preliminary calculations support the development, and their results usually agree with the predictions. However, we know that everything we manufacture is always subject to tolerances, and these tolerances cause variations in the results that often cannot be analyzed easily without a powerful calculation application. In the 1940s, von Neumann and Ulam developed a non-deterministic, statistical method called Monte Carlo. In the following blog posts, we are going to analyze the use of this powerful method for predicting the effect of tolerances in circuits, especially when they are manufactured industrially.

In any process, the output is a function of the input variables. These variables generate a response which can be determined whether the system is linear or not. The relationship between the response and the input variables is called the transfer function, and knowing it allows us to obtain the response to any input excitation.

However, it must be taken into account that the input variables are random variables, with their own distribution functions, and are subject to stochastic processes, although their behavior is predictable through probability theory. For example, when we make a measurement, we get its average value and the error with which that magnitude can be measured. This allows us to bound the region in which the magnitude is correct and to decide when it behaves incorrectly.

Over many years, I have learned to turn simulation results into real, physical results with predictable behavior and valid conclusions, and I have noticed that in most cases simulation is reduced to getting the desired result, without studying how the variables influence that result. However, most simulators have very useful statistical algorithms that, properly used, provide data the designer can use later to predict a system's behavior, or at least to analyze what may happen.

However, these methods are not usually used, either through lack of knowledge of statistical models or through ignorance of how those models can be applied. Therefore, in these posts we shall apply the Monte Carlo method to circuit simulations and discover an important tool that is unknown to many simulator users.

DEVICES AS RANDOM VARIABLES

Electronic circuits are made of simple electronic devices, but these have a statistical behavior due to manufacturing. Device manufacturers usually publish their nominal values and tolerances. Thus, a resistor manufacturer not only publishes the rating values and the dimensions: tolerances, stress, temperature dependence, etc., are also published. These parameters provide important information and, properly analyzed with a powerful calculation tool (such as a simulator), they allow us to predict the behavior of any complex circuit.

In this post, we are going to analyze exclusively the error environment around the nominal value of one resistor. For any resistor, the manufacturer defines its nominal value and its tolerance; we assume 1 kΩ for the nominal value and ±5% for the tolerance. This means the resistance value can fall between 950 Ω and 1.05 kΩ. In the case of a bipolar transistor, the current gain β can take a value between 100 and 600 (e.g. NXP BC817), which may mean an important and uncontrollable variation of the collector current. Knowing these data, we can analyze the statistical behavior of an electronic circuit through the Monte Carlo method.

First, let us look at the resistor: we have said that it has a ±5% tolerance. We will analyze its behavior with the Monte Carlo method, using a circuit simulator. A priori, we do not know the probability function, although the most common is a Gaussian distribution, whose well-known expression is

f(x) = 1/(σ·√(2π)) · exp(−(x−μ)²/(2σ²))

where μ is the mean and σ² the variance. Analyzing it in the simulator, through the Monte Carlo method and with 2000 samples, we get a histogram of the resistance value, as shown in the next figure.

Histogram of the resistance values obtained with the Monte Carlo analysis

The Monte Carlo algorithm introduces a variable whose values follow a Gaussian distribution, but the values it takes are random. If these 2000 samples were taken in five different processes with 400 samples each, we would still find a Gaussian tendency, but their distributions would be different.

Gaussian distributions for different processes

Therefore, working properly with the random variables, we can get a complete study of the feasibility of any design and of the sensitivity that each variable shows. In the next example, we are going to analyze the bias point of a bipolar transistor whose β varies between 100 and 600, with an average value of 350 (β is considered to follow a Gaussian distribution), biased with resistors of ±5% tolerance, and we will study the collector current variation using 100 samples.

STATISTICAL ANALYSIS OF A BJT BEHAVIOR IN DC

Now, we are going to study the behavior of a bias circuit with a bipolar transistor, like the one in the next figure,

Bias circuit of a BJT

where the resistors have a ±5% tolerance and the transistor β varies between 100 and 600, with a nominal value of 350. Its bias point is Ic=1.8 mA, Vce=3.2 V. Running a Monte Carlo analysis with 100 samples, we get the next result.

BJT collector current distribution with respect to the random variables

 

Looking at the shape of the graph, we can check that the result converges to a Gaussian distribution, with an average value Ic=1.8 mA and a tolerance of ±28%. Suppose now that we do the same sweep in several batches of 100 samples each. The obtained result is

BJT collector current distribution over several batches

where we can see that each batch produces a graph which converges to a Gaussian distribution. In this case, the Gaussian distribution has an average value μ=1.8 mA and a variance σ²=7%. Thus, we have been able to analyze each process not only as one global statistical analysis but also batch by batch. Suppose now that β is a random variable with a uniform distribution function between 100 and 600. Analyzing only 100 samples, the next graph is obtained.

Results with a uniform β distribution

and it can be seen that the current converges to a uniform distribution, increasing the current tolerance range and the probability at the extremes. Therefore, we can also study the circuit behavior when the variables have different distribution functions.

Seeing that, with the Monte Carlo method, we are able to analyze the behavior of any complex circuit in terms of tolerances, it will equally help us to study how those results could be corrected. In the next posts we shall analyze this method more deeply, studying its potential and what can be achieved with it.

CORRECTING THE TOLERANCES

In the simulated circuit, when we characterized the transistor β as a uniform random variable, the probability of unwanted current values (at the extremes) increased. This is one of the most problematic features, not only of bipolar transistors but also of field-effect transistors: the variation of their current ratios. This simple example shows what happens when we use a typical correction circuit for the β variation, such as the classic biasing by emitter resistance.

Bias circuit by emitter resistance

Using this circuit and analyzing it with Monte Carlo, we can compare its results with those obtained in the previous case, now using 1000 samples. The result is

Results with both circuits

where we can check that the probability has concentrated around 2 mA, reducing the probability density at low current values and narrowing the distribution function. Therefore, Monte Carlo is a method that not only enables us to analyze the behavior of a circuit subject to statistical variations, but also allows us to optimize the circuit and adjust it to the desired limit values. Used properly, it is a powerful calculation tool that improves our knowledge of our circuits.
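To make the idea concrete, the sketch below compares a crude fixed-bias stage with an emitter-degenerated one under the same β spread. The circuit values and the simplified bias equations are assumptions chosen for illustration, not the circuit actually simulated in this post.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 1000

Vcc, Vbe = 12.0, 0.7
beta = rng.uniform(100, 600, size=N)          # uniform beta spread, as in the example

# Fixed bias (hypothetical values): base resistor sets Ib, so Ic = beta * Ib
Rb = 2.2e6
Ic_fixed = beta * (Vcc - Vbe) / Rb

# Emitter-resistance bias (hypothetical divider Vbb, Rth): Ic ~ (Vbb - Vbe) / (Re + Rth/beta)
Vbb, Rth, Re = 2.7, 10e3, 1e3
Ic_emitter = (Vbb - Vbe) / (Re + Rth / beta)

for name, Ic in [("fixed bias", Ic_fixed), ("emitter resistance", Ic_emitter)]:
    print(f"{name:18s}: mean = {Ic.mean()*1e3:.2f} mA, spread = {100*Ic.std()/Ic.mean():.1f} %")
```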

CONCLUSIONS

With this first post we begin a series dedicated to the Monte Carlo method, presenting the method and its usefulness. As we have seen in the examples, the Monte Carlo method provides very useful data about the limitations and variations of the circuit we are analyzing, provided we know how they are characterized. On the other hand, it allows us to improve the circuit using statistical studies, in addition to setting the standards for verification in any production process.

In the next posts we shall go deeper into the method through the study of a specific circuit from one of my most recent projects, analyzing the expected results and the different simulations that can be performed with the Monte Carlo method, such as worst case, sensitivity, and post-production optimization.

REFERENCES

  1. Castillo Ron, Enrique, “Introducción a la Estadística Aplicada”, Santander, NORAY, 1978, ISBN 84-300-0021-6.
  2. Peña Sánchez de Rivera, Daniel, “Fundamentos de Estadística”, Madrid, Alianza Editorial, 2001, ISBN 84-206-8696-4.
  3. Kroese, Dirk P., et al., “Why the Monte Carlo method is so important today”, WIREs Comp Stat, vol. 6, pp. 386-392, 2014, DOI: 10.1002/wics.1314.

Tin whiskers growing on Zamak alloys


From today, some entries of interest to readers will be published in English. This first entry explains the physical reasons why tin whiskers grow on copper or zinc surfaces, and the methods to prevent this phenomenon.

Whiskers are small tin wires that grow due to differences in surface tension at the bonding surface of the metals when an electrochemical plating is applied. In 2006, the author and his R&D team had to research this phenomenon: we found that the functionality of one product degraded over time, particularly when it was stored for more than three months. We therefore decided to study the phenomenon, to understand the causes that produce it and to find possible solutions to prevent it in future developments.

INTRODUCTION

The occurrence of uncontrolled phenomena is important for Research & Development in any factory. Most of the time, private companies apply more Development than Research to their products. However, there are many occasions when development runs into drawbacks and phenomena that are not part of the company’s know-how. These phenomena enable R&D teams to acquire new knowledge and apply it in the future.

In 2006, my R&D team found a phenomenon affecting the proper operation of one popular product. It was completely unknown to us, but already experienced by others: whiskers. It occurred in an important product that we were developing. Because this product was the most important in our catalog, we were forced to do deeper research to find a solution, since there was stored material which might be defective. So my R&D team got to work on this phenomenon.

The word whisker originally refers to long mammalian hairs. In Mechanical Engineering, whiskers are tin filaments that grow on a material that has been processed by electroplating. Electroplating is used in industry because it provides fine finishes, easy solderability and protection against corrosion. In our case, the electroplating is made with tin over zamak (a zinc, magnesium, aluminum and copper alloy, widely used in industrial housings), to allow soldering on the zamak (which is not solderable by itself) and to provide a well-finished product. Therefore, knowledge of the phenomenon and its possible solutions was very important for us.

TIN WHISKERS ON ZAMAK ALLOY

This phenomenon appeared on Zamak housings: they must receive a tin plating so that parts can be soldered to the housings, because Zamak does not allow conventional soldering.

A drawback appeared when, after the material had been stored for a long time, this product, a narrow-band amplifier with an 8 MHz cavity filter, showed strong deviations in its electrical characteristics. This forced a new adjustment of the filter. In this product there were two separate adjustments: the first made during assembly, and the second 24 hours after the first. After completing both adjustments, the cavity filter usually remained stable, but a third adjustment was recommended if the product was stored for over 3 months (storage rotation).

However, during the product development, my R&D team discovered that the cavity filter was not stable and that the degradation of the selectivity and insertion losses grew over time. This meant that, despite the third adjustment, we could not ensure the stability of the filter.

Tin whiskers growth

At first, the phenomenon looked like a failure in the electronic components, caused by a defective lot of capacitors. It then turned out to be a new phenomenon for us: we had accidentally generated whiskers on the tin surface.

As I said, whiskers are metal filament-like crystals which grow on the surface of the tin that covers the zamak housing. The crystals are so fine that they break when a hand is passed over the surface, and they melt when a short-circuit current, which does not have to be very high, flows through them. In our case, they reduced the cavity volume of the filter, changing its resonant frequency and moving the insertion response to higher frequencies.

When we started to study this phenomenon, we discovered that it had been known since the 1940s and that even NASA had studied it in depth, so part of the way was already covered: we verified that it was associated with the type of contact surface between the two materials and with the thickness of the tin plating. The surface tension of the materials and the storage temperature were involved, too. In summary, whisker growth was governed by the equations of Irina Boguslavsky and her collaborator Peter Bush (6):

Whisker growth equations

According to the experimental observations, both equations predicted rather accurately the whisker growth observed in the tin layers. In these equations, σ represents the stress, related to the surface tension; LW is related to the thickness of the surface bonding; and n is a value dependent on the dislocation density and the temperature T. The k1, k2 and k3 terms are constants which depend on the material properties, and RW is the radius of the filament. The h1 and h2 terms refer to the filament growth that has already happened in the bonding area (h1) and the time at which it happens (h2).

Tin whisker growth at 3 and 6 months

In these equations, when LW decreases, h2 increases, because it is an exponential function with n >> 1. Therefore, the plating thickness is one of the variables which can be controlled. In our case, this thickness had been decreased from 20 μm to 6-8 μm because the new development incorporated a threaded “F”-type connector instead of the former 9½ mm DIN connector. Since the connectors were produced in the molding process and subsequently threaded, they were made before the tin plating, and a 20 μm tin plating did not allow the connectors to be threaded correctly.

The σ term is related to the surface tension at the junction and depends only on the materials used. Studying different thicknesses with the plating manufacturer, we verified that the expressions were consistent, since for smaller thicknesses the growth was always much higher than for larger thicknesses, and there was always a tendency to grow, although it was lower in 20 μm platings. Once the plating was made, the stress forces applied by the surface tension of the zamak “pushed” the tin atoms outwards to maintain the equilibrium conditions, while the surface tension of the tin opposed them. With a thinner tin layer, the forces applied at the contact surface were higher than the opposing forces on the tin surface, and the internal forces opposing the surface forces were weaker, allowing the whiskers to grow outwards.

POSSIBLE SOLUTIONS TO WHISKERS GROWTH

One solution, provided by Lucent Technologies, was to apply an intermediate nickel plating between the zinc surface and the tin plating.

Ni plating between Sn and Zn surfaces

Researchers from Lucent Technologies, after several experiments, found that the growth of whiskers was reduced significantly, to nearly zero.

Growth for the two types of tin plating (bright tin and satin tin)

In the graphs we can see the growth of bright tin over a copper surface, which behaves similarly to zamak. It grows rapidly after 2 months, with a very steep growth slope. However, when an intermediate Ni layer is applied, the growth is practically zero. For satin tin, the growth starts after 4 months and shows a slightly lower slope; after applying the Ni layer, the growth is again practically zero.

The thickness of the Ni plating could be between 1 μm and 2 μm, while the thickness of the tin plating could be kept around 8 μm. Thus, the defective threading would be avoided while the whiskers were eliminated. However, the process was quite expensive, so this option was discarded.

Therefore, we were facing another problem: eliminating the phenomenon implied increasing the thickness of the tin plating on the zamak alloy, but that also caused defective threading on the “F” connector. A molding modification to provide more material on the connector was quite expensive and involved a long modification time, since inserts had to be included; however, it was the right way to correct the whiskers.

Another problem arose with the stored material and the material that was being manufactured. The stored material could not be reprocessed, because it had already been assembled and could not be plated again. The intermediate solution was to remove the tin crystals by cleaning with compressed air.

For the material in the manufacturing process (pieces not yet plated), a temporary solution was to replace the tin plating with silver plating. Silver is solderable and can be applied in very thin layers while keeping its properties, but it has the disadvantage that its oxide gives a dirty, stained finish, affecting the aesthetics of the product.

Finally, the in-depth study of the phenomenon established that increasing the tin thickness should become the standard. Defective threading would be corrected with a tool to re-cut the thread on the connector, and the mold could be modified, by changing the inserts of the threaded connectors, to allow a 10-20 μm plating which does not fill the threads.

CONCLUSIONS

Tin whiskers are a little-known phenomenon. They appear at the microscopic level and seem to have been studied mainly by agencies and national research laboratories, with strong budgets and the appropriate means for their observation.

In Spain, we found few laboratories studying it, although it occurs mainly in industry, caused by the handling of materials. This work was done by my R&D team, allowing us to acquire enough knowledge to correct and prevent the phenomenon and to avoid its occurrence again.

However, there are many articles about it on the web, which allowed us to understand it and to analyze its causes and possible solutions.

REFERENCES

  1. H. Livingston, “GEB-0002: Reducing the Risk of Tin Whisker-Induced Failures in Electronic Equipment”, GEIA Engineering Bulletin, GEIA-GEB-0002, 2003.
  2. B. D. Dunn, “Whisker formation on electronic materials”, Circuit World, vol. 2, no. 4, pp. 32-40, 1976.
  3. R. Diehl, “Significant characteristics of Tin and Tin-lead contact electrodeposits for electronic connectors”, Metal Finish, pp. 37-42, 1993.
  4. D. Pinsky and E. Lambert, “Tin whisker risk mitigation for high-reliability systems integrators and designers”, Proc. 5th Int. Conf. Lead Free Electronic Components and Assemblies, 2004.
  5. Chen Xu, Yun Zhang, C. Fan and J. Abys, “Understanding Whisker Phenomenon: Driving Force for Whisker Formation”, Proceedings of IPC/SMEMA Council APEX, 2002.
  6. I. Boguslavsky and P. Bush, “Recrystallization Principles Applied to Whisker Growth in Tin”, Proceedings of IPC/SMEMA Council APEX, 2003.