
Basis of Microwave Heating

The microwave oven has become very popular in recent years and is now an essential appliance in almost any kitchen. However, microwave heating still seems an esoteric, almost magical, subject to many people who have one at home. In this post we explain the basis of microwave heating, not only for heating food but also for industrial heating and DHW (domestic hot water).

In 1946, an American engineer at the Raytheon Corporation, Percy Spencer, working on RADAR applications, discovered that a candy bar in his pocket had melted. He was testing a magnetron and began experimenting, confining the EM field inside a metal cavity. He tested first with corn kernels and then with a chicken egg; the latter exploded.

He verified that a high-intensity EM field affected food because of the water it contains. Water is a poor propagator of radio waves, because it has a high dielectric constant and high losses. Being a polar molecule, when a variable EM field is applied the dipole tends to align with the field, which agitates the water molecule and increases its temperature. The popular belief is that this only happens at 2.4 GHz, but it actually happens throughout the microwave band. This frequency is used by ovens because it lies within a licence-free band known as ISM (short for Industrial, Scientific and Medical). However, there are heating processes at 915 MHz and other frequencies.

First, water, like almost all dielectrics, has under normal conditions a complex dielectric constant ε = ε' − jε''. When this complex dielectric constant is introduced into Maxwell's equations, the imaginary term behaves as a dielectric conductivity, given by the expression

\sigma = \omega \epsilon'' \epsilon_0

This conductivity is not produced by the mobility of electrons but by the mobility of the polar water molecules; therefore, it increases as the frequency increases.

On the other hand, the presence of this conductivity limits the penetration of the microwaves into the water, attenuating the EM intensity with distance. This is characterised by the penetration depth, given by

\delta_p=\dfrac {\lambda \sqrt{\epsilon'}}{2 \pi \epsilon''}

and therefore the higher the frequency, the lower the penetration depth. If the electric field intensity is |E|, then, by Ohm's law, the volumetric power is given by

Q=\omega \epsilon'' \epsilon_0 |E|^2

This volumetric power will affect a specific region of the water, causing heating.
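To get a feel for these magnitudes, the sketch below evaluates the three expressions above for liquid water at 2.4 GHz. The permittivity values (ε' ≈ 78, ε'' ≈ 10 at room temperature) and the field strength are assumed, order-of-magnitude figures, not values taken from this post.

```python
# Rough numerical check of sigma, delta_p and Q for water at 2.4 GHz.
# eps_r1, eps_r2 and E are assumed, order-of-magnitude values.
import math

eps0 = 8.85e-12            # vacuum permittivity, F/m
c0 = 3e8                   # speed of light, m/s
f = 2.4e9                  # frequency, Hz
omega = 2 * math.pi * f
eps_r1 = 78.0              # real part of the relative permittivity (assumed)
eps_r2 = 10.0              # imaginary part, the losses (assumed)
E = 1e3                    # electric field strength inside the water, V/m (assumed)

sigma = omega * eps_r2 * eps0                                # dielectric conductivity, S/m
lam = c0 / f                                                 # free-space wavelength, m
delta_p = lam * math.sqrt(eps_r1) / (2 * math.pi * eps_r2)   # penetration depth, m
Q = omega * eps_r2 * eps0 * E**2                             # volumetric power density, W/m^3

print(f"sigma   = {sigma:.2f} S/m")
print(f"delta_p = {delta_p * 100:.1f} cm")
print(f"Q       = {Q / 1e6:.2f} MW/m^3")
```

With these assumed figures the penetration depth comes out below 2 cm, which is consistent with a domestic oven heating mainly the outer layer of the food.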

On the other hand, there is a heat transfer effect due to thermal conductivity, such that the surface heat flux is

\dfrac {dQ_s}{dt}=-k \displaystyle \int_s {\vec \nabla T d \vec S}

Applying the divergence theorem, the variation of heat per unit volume will be

\dfrac {dQ_V}{dt}=-k \nabla^2 T

This flow redistributes the temperature inside the volume element, losing energy, and therefore its sign is negative.

WATER HEATING

Under macroscopic conditions, the energy per unit volume that must be applied to water to increase its temperature is given by

E_v=\rho_m c_e \Delta T

with ρ_m the water density, c_e its specific heat and ΔT the temperature increase. In terms of power, we have

Q=\rho_m c_e \dfrac{dT}{dt}

where the total time variation of the temperature must be calculated; since water is a fluid that may be in motion, the material derivative must be used, an operator that includes both the local time variation and convection. Applying this operator we get

\dfrac{dT}{dt}=\dfrac{\partial T}{\partial t}+\vec v \cdot \vec \nabla T

and the volumetric power is given by

Q=\rho_m c_e \left(\dfrac{\partial T}{\partial t}+\vec v \cdot \vec \nabla T \right)-k\nabla^2 T

which is the expression that governs the heating of water when a volumetric density of EM power Q is applied.

On the other hand, fluid movement is governed by the Navier-Stokes equations, through

\rho_m \dfrac {\partial \vec v}{\partial t}=-\vec \nabla P+\mu \nabla^2 \vec v + \rho_m \vec g

where P is the pressure, μ the dynamic viscosity of the fluid and g the gravitational acceleration.

DHW SYSTEMS USING MICROWAVES

In the case of a domestic hot water system, there would be two heating possibilities:

  1. Through a closed-circuit system that moves a water flow, taking advantage of its very low viscosity (10⁻³ Pa·s).
  2. Using a vessel with still water, accumulating the heat and transferring it to other areas.

In the first case, the volumetric power needed to heat the closed-circuit system must be obtained by solving both the thermal equation and the Navier-Stokes equations, and its efficiency is greater than in the second case, where the temperature rise is governed by

Q+k\nabla^2 T=\rho_m c_e \dfrac{\partial T}{\partial t}

These equations can be solved using the FEM method, as we saw in the post about simulation.
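As a simple illustration (a finite-difference sketch, not the FEM approach mentioned above), the second case can be integrated in one dimension with an explicit scheme; the power density Q, the vessel size and the boundary conditions are assumed values chosen only for the example.

```python
# 1D explicit finite-difference sketch of  rho*ce*dT/dt = Q + k*d2T/dx2
# for still water heated by an assumed, uniform microwave power density Q.
import numpy as np

rho, ce, k = 1000.0, 4186.0, 0.6   # water: density, specific heat, thermal conductivity (SI)
Q = 5e5                            # assumed volumetric power density, W/m^3
L, N = 0.10, 101                   # 10 cm of water discretised with 101 nodes
dx = L / (N - 1)
alpha = k / (rho * ce)             # thermal diffusivity, m^2/s
dt = 0.4 * dx**2 / (2 * alpha)     # time step below the explicit stability limit

T = np.full(N, 20.0)               # initial temperature, deg C
t_end = 60.0                       # simulate one minute of heating
steps = int(t_end / dt)

for _ in range(steps):
    lap = np.zeros(N)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (Q + k * lap) / (rho * ce)
    T[0] = T[-1] = 20.0            # vessel walls held at 20 deg C (assumed)

print(f"{steps} steps, centre temperature after {t_end:.0f} s: {T[N // 2]:.1f} degC")
```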

In any case, although both methods are possible, the first will always be cheaper than the second, since the second can only be used to raise the temperature of another fluid in motion and needs more energy because of the losses involved in that heat transfer.

IS IT POSSIBLE TO HEAT OTHER MATERIALS USING MICROWAVES?

In general, any material with dielectric losses can be heated using microwaves, provided those losses do not raise the electrical conductivity to values that cancel the electric field (in a perfect conductor the electric field is zero). If we write the expression obtained above in terms of the electric field, we get

\omega \epsilon'' \epsilon_0 |E|^2+k\nabla^2 T=\rho_m c_e \left(\dfrac{\partial T}{\partial t}+\vec v \cdot \vec \nabla T \right)

and therefore we can obtain a relationship between ε'' and the temperature increase for a given electric field |E|.
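As a rough illustration of that relationship, the sketch below estimates the initial heating rate dT/dt = ωε''ε₀|E|²/(ρ_m c_e), neglecting conduction and convection, for water and for a very low-loss dielectric such as PTFE. All material values and the field strength are assumed, approximate figures.

```python
# Initial heating rate dT/dt = (omega * eps'' * eps0 * |E|^2) / (rho * ce),
# neglecting conduction and convection. Material values are rough assumptions.
import math

eps0 = 8.85e-12
omega = 2 * math.pi * 2.4e9
E = 1e3                        # assumed field strength, V/m

materials = {
    #           eps''    rho (kg/m^3)  ce (J/(kg*K))
    "water": (10.0,    1000.0,       4186.0),
    "PTFE":  (0.0005,  2200.0,       1000.0),   # very low-loss dielectric
}

for name, (eps2, rho, ce) in materials.items():
    Q = omega * eps2 * eps0 * E**2       # volumetric power, W/m^3
    dTdt = Q / (rho * ce)                # initial heating rate, K/s
    print(f"{name:5s}: Q = {Q:12.1f} W/m^3, dT/dt = {dTdt:.2e} K/s")
```

With the same field, the water heats up roughly four orders of magnitude faster than the low-loss dielectric, which is why microwave heating is selective with respect to ε''.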

INFLUENCE ON THE HUMAN BODY

The human body is another dielectric, composed mostly of water. Therefore, EM radiation should cause heating in our body. Let us work out the field strength that would raise our temperature above 50 °C in one minute, reducing the expressions to

\omega \epsilon'' \epsilon_0 |E|^2=\rho_m c_e \dfrac{\Delta T}{\Delta t}

Taking ε'' = 4.5 (water at 2.4 GHz), and knowing that the average density of the human body is 1100 kg/m³ and its specific heat 14.23 kJ/(kg·°C), we get

|E|=\sqrt {\dfrac {1100 \cdot 14230 \cdot \left(\dfrac{50-33}{60} \right)}{2 \pi \cdot 2.4 \cdot 10^9 \cdot 4.5 \cdot 8.85 \cdot 10^{-12}}} \approx 2.7 \; kV/m

while a Wi-Fi router emits a field strength of less than 2 V/m at 1 m distance. Therefore, a Wi-Fi router will not cause heating in our body, even if we are right next to it.

And a mobile phone? These devices are more powerful… Not even at its emission peak, since at most it emits about 12 V/m, and we need roughly 2700 V/m, more than 200 times as much. So the phone does not heat our ear either. And bearing in mind the penetration depth, the EM radiation penetrates at most about 2 cm, by which point the field strength has dropped to half and the power to a quarter, because of the dielectric conductivity of our body. That is without considering that each of our tissues attenuates differently depending on its composition and structure.
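The figure above can be reproduced in a few lines; the sketch below uses the same (approximate) body parameters quoted in the text and compares the result with the typical Wi-Fi and mobile field strengths.

```python
# Field strength needed to raise body temperature from 33 to 50 degC in one minute,
# using the same approximate values as in the text, compared with Wi-Fi and mobile.
import math

eps0 = 8.85e-12
omega = 2 * math.pi * 2.4e9
eps2 = 4.5                 # eps'' assumed in the text
rho = 1100.0               # average body density, kg/m^3
ce = 14230.0               # specific heat used in the text, J/(kg*K)
dT, dt = 50.0 - 33.0, 60.0

E = math.sqrt(rho * ce * (dT / dt) / (omega * eps2 * eps0))
print(f"|E| needed      : {E:.0f} V/m")                 # about 2.7 kV/m
print(f"vs Wi-Fi ~2 V/m : {E / 2:.0f} times higher")
print(f"vs phone ~12 V/m: {E / 12:.0f} times higher")
```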

CONCLUSION

This post has tried to explain the physical phenomena behind microwave heating and its possible industrial applications, beyond the familiar oven that almost every kitchen already has among its appliances. One of the most immediate applications is DHW, although applications have also been developed in other industrial areas. And although microwaves do produce heating, the field strengths required are very far from the radiation we receive from mobile communications.

REFERENCES

  1. Menéndez, J.A., Moreno, A.H. “Aplicaciones industriales del calentamiento con energía microondas”. Latacunga, Ecuador: Editorial Universidad Técnica de Cotopaxi, 2017, 1st ed., 315 pp. ISBN: 978-9978395-34-9
  2. D. Salvi, Dorin Boldor, J. Ortego, G. M. Aita & C. M. Sabliov “Numerical Modeling of Continuous Flow Microwave Heating: A Critical Comparison of COMSOL and ANSYS”, Journal of Microwave Power and Electromagnetic Energy, 2016, 44:4, 187-197, DOI: 10.1080/08327823.2010.11689787

Simulation on Physical Systems

I have spent a long time writing posts about simulation. The main reason is that, over many years, I have learned the value of using computers for the analysis of physical systems. Without these tools I would never be able to obtain reliable results, because of the sheer amount of calculation involved. Modern simulators, able to solve complex calculations using the capacity of today's computers, allow us to obtain a more realistic behaviour of a complex system once its structure is known. Physics and Engineering work every day with simulations to make better predictions and take decisions. In this post I am going to show the most important aspects we should keep in mind about simulation.

In 1982, the physicist Richard Feynman published an article about the analysis of physical systems using computers (1). By then, computer technology had progressed to the point where a much greater calculation capacity was available. Programming languages such as FORTRAN could handle complex formulas and allowed the computation of systems described by complex integro-differential equations, whose resolution usually required numerical methods. So, in those early years, physicists began to run simulations with programs able to solve the constitutive equations of a system, although not always with simple descriptions.

A great step forward in electronics was the SPICE program, at the beginning of the 70s (2). This FORTRAN-based program was able to compute non-linear electronic circuits, neglecting radiation effects, and to solve the time-domain integro-differential equations. Over the years, Berkeley's SPICE became the reference among simulation programs, and its success was such that almost all the simulation programs developed since then are based on the Nagel and Pederson algorithms of the 70s.

From the 80s, and seeking to solve three-dimensional problems, the method of moments (MoM) was developed. It solves systems formulated as integral equations on the boundaries (3) and became very popular, being used in Fluid Mechanics, Acoustics and Electromagnetism. Today it is still used to solve two-dimensional electromagnetic structures.

The algorithms then made huge progress with the emergence in the 90s of the finite element method (FEM, frequency domain) and the finite-difference time-domain method (FDTD), based on the resolution of systems formulated as differential equations, which became important benchmarks for a generation of new algorithms able to solve complex systems (4). With these advances, the contribution of simulation to Physics took on spectacular dimensions.

WHAT IS THE VALUE OF AN ACCURATE MODEL?

When we study any physical phenomenon, we usually invoke a model. Whether the phenomenon is isolated or part of an environment, and whether it belongs to Acoustics, Electromagnetism or Quantum Mechanics, having a well-characterised model is essential to obtain its behaviour in terms of its variables. Using an accurate model increases our confidence in the results.

However, modelling is complex. One needs to know the relationships between the variables and, from there, determine a formulation that defines the behaviour within a computer.

An example of a model is a piezoelectric material. In Electronics, piezoelectric materials are commonly used as resonators, and it is usual to see these devices (quartz or other resonant materials based on this property).

A piezoelectric model, very successful since the 40s, was developed by Mason (5). Thanks to the similarity between electromagnetic and acoustic waves, he managed to combine both domains using transmission lines based on the telegrapher's equations, writing the constitutive equations accordingly. In this way he developed a piezoelectric model which is still used today. This model can be seen in Fig. 1 and has already been studied in previous posts.

Fig. 1 – Mason piezoelectric model

This model practically solved the small-signal analysis in the frequency domain, yielding an impedance resonance trace as shown in Fig. 2.

Fig. 2 – Results of the Mason model analysis
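The full Mason model uses acoustic transmission lines, but near a single resonance it is often approximated by the lumped Butterworth-Van Dyke (BVD) equivalent circuit: a motional R-L-C branch in parallel with the static capacitance C0. The sketch below, with made-up element values, reproduces the kind of resonance/anti-resonance impedance trace shown in Fig. 2; it is a simplified stand-in, not the Mason model itself.

```python
# Butterworth-Van Dyke (BVD) approximation of a piezoelectric resonator.
# Element values are arbitrary, chosen only to place the resonance near 10 MHz.
import numpy as np

R1, L1, C1 = 10.0, 10e-3, 25.33e-15   # motional branch (assumed values)
C0 = 5e-12                            # static capacitance (assumed)

f = np.linspace(9.5e6, 10.5e6, 2001)
w = 2 * np.pi * f
Zm = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)   # motional branch impedance
Z0 = 1.0 / (1j * w * C0)                      # static capacitance impedance
Z = Zm * Z0 / (Zm + Z0)                       # parallel combination

fs = f[np.argmin(np.abs(Z))]   # series resonance: impedance minimum
fp = f[np.argmax(np.abs(Z))]   # parallel (anti-)resonance: impedance maximum
print(f"fs = {fs / 1e6:.3f} MHz, fp = {fp / 1e6:.3f} MHz")
```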

However, the models need to expand their predictive capacity.

The Mason model describes the piezoelectric behaviour correctly when we work in the linear regime, but it falls short when we need to know the large-signal behaviour. New advances in the study of piezoelectric materials therefore included the non-linear relationships in the constitutive equations (6).

Fig. 3 – Three-dimensional model of an inductor

In three-dimensional models we must know well the characteristics that define the materials in order to obtain optimal results. In the inductor shown in Fig. 3, CoFeHfO is used as the magnetic material. It has a frequency-dependent complex magnetic permeability that must be defined in the material libraries.

The better the model is defined, the better the results, and this is the physicist's fundamental task: obtaining a reliable model from the study of the phenomena and the materials.

A model is usually extracted by direct measurement or through derived magnitudes, using systems of equations. With the right model definition, the simulation results will be more reliable.

ANALYSIS USING SIMULATION

Once the model is correctly defined, we can perform an analysis by simulation. In this case we will study the H-field inside the inductor at 200 MHz using FEM analysis, and plot it as shown in Fig. 4.

Fig. 4 – Magnetic field strength inside the inductor

The result is drawn as a vector plot, since we chose that representation to see the direction of the H-field inside the inductor. We can verify, first, that the maximum H-field is inside the inductor, pointing towards positive Y in the upper area, while in the lower part the orientation is reversed. The maximum H-field obtained is 2330 A/m with a 1 W excitation between the inductor electrodes.

The behaviour is precisely that of an inductor, whose value can also be estimated by calculating its impedance and drawing it on the Smith chart, Fig. 5.

Fig. 5 – Inductor impedance on the Smith chart

The Smith chart trace clearly shows an inductive impedance whose value decreases as the frequency increases, because of the losses of the CoFeHfO magnetic material. In addition, these losses make the resistance increase with frequency, so there will be a maximum Q somewhere in the useful band.

Fig. 6 – Quality factor of the inductor

Since a lossy inductor has a quality factor Q, we can plot it as a function of frequency in Fig. 6.
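The post-processing behind Figs. 5 to 8 can be sketched in a few lines: given the complex impedance Z(f) exported from the field solver, the effective inductance, series resistance and quality factor follow directly. Here Z(f) is replaced by a made-up lossy-inductor model, with a loss term chosen so that Q shows a maximum inside the band; it only stands in for the FEM output.

```python
# Extracting L(f), R(f) and Q(f) from an impedance trace Z(f).
# Z(f) below is a made-up series R(f) + jwL(f) model standing in for the FEM output.
import numpy as np

f = np.linspace(50e6, 500e6, 10)
w = 2 * np.pi * f
L0 = 12e-9 * (1.0 - 0.3 * f / 500e6)    # inductance falling slightly with frequency (assumed)
R = 0.2 * (1.0 + (f / 100e6) ** 2)      # losses rising with frequency (assumed model)
Z = R + 1j * w * L0                     # stand-in for the exported impedance

L_eff = Z.imag / w                      # effective inductance, H
R_eff = Z.real                          # effective series resistance, ohm
Q = Z.imag / Z.real                     # quality factor

for fi, Li, Ri, Qi in zip(f, L_eff, R_eff, Q):
    print(f"{fi / 1e6:6.0f} MHz: L = {Li * 1e9:5.1f} nH, R = {Ri:6.2f} ohm, Q = {Qi:5.1f}")
```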

Therefore, with the FEM simulation we have been able to analyse the physical parameters of a modelled structure that would have taken much more time and effort to obtain through complex calculations. This shows, as Feynman pointed out in that 1982 article, the power of simulation when accurate models and proper software are available to perform these analyses.

However, simulation does not always deliver the best results. It is precisely the previous step, having an accurate model that faithfully defines the physical behaviour of a structure, which ensures the reliability of the results.

EXPERIMENTAL RESULTS

The best way to check whether a simulation is valid is to obtain experimental results. Fortunately, the simulation performed on the previous inductor is taken from (7), and in that reference the authors show experimental results that validate the inductor model. In Figs. 7 and 8 we can see the inductance and resistance values which, together with the quality factor, can be compared with the authors' experimental results.

Fig. 7 – Inductance as a function of frequency

Fig. 8 – Effective resistance as a function of frequency

The results obtained by the authors, using HFSS for the simulation of the inductor, can be seen in Fig. 9. The authors simulated the structure with and without core, and show the simulation against the experimental results. From the graphs it can be concluded that the simulation results agree closely with those obtained through experimental measurements.

This shows us that simulation is effective when the model is reliable, and that a model is accurate when the results obtained through simulation converge with the experimental ones. In this way we have a powerful analysis tool that allows us to know in advance the behaviour of a structure and to take decisions before moving on to the prototyping stage.

Fig. 9 – Experimental results

In any case, convergence is also important in a simulation. An FEM simulation needs a mesh fine enough to achieve good convergence. A low convergence level gives results far from the optimum, and very complex structures require a lot of processing power, a large amount of RAM and, sometimes, even a simulation distributed over several processors. For more complex structures the simulation time increases considerably, and that is one of the main disadvantages.

Although FEM simulators now allow the optimisation of values and even integration with other simulators, they still require, because of the complexity of the calculations involved, powerful computers to carry out those calculations reliably.

CONCLUSIONS

Once again we agree with Feynman when, in that 1982 article, he chose precisely a topic which seemed of no interest to his audience. Since then, Feynman's article has become a classic of the Physics literature. The experience I have gained over the years with several simulators tells me that the path they opened will advance considerably when quantum computers become a reality and processing speed rises, allowing these tools to obtain reliable results in a short time.

Simulation of physical systems has been an important step forward, allowing results to be obtained without building prototypes first, and it represents an important saving in research and development costs.

REFERENCES

  1. Feynman, R; “Simulating Physics with Computers”; International Journal of Theoretical Physics, 1982, Vols. 21, Issue 6-7, pp. 467-488, DOI: 10.1007/BF02650179.
  2. Nagel, Laurence W. and Pederson, D.O. “SPICE (Simulation Program with Integrated Circuit Emphasis)”, EECS Department, University of California, Berkeley, 1973, UCB/ERL M382.
  3. Gibson, Walton C., “The Method of Moments in Electromagnetics”, 2nd ed., CRC Press, 2014, ISBN: 978-1-4822-3579-1.
  4. Reddy, J.N., “An Introduction to the Finite Element Method”, 2nd ed., McGraw-Hill, 1993, ISBN: 0-07-051355-4.
  5. Mason, Warren P., “Electromechanical Transducers and Wave Filters”, 2nd ed., Van Nostrand Reinhold Inc., 1942, ISBN: 978-0-4420-5164-8.
  6. Shim, Dong S. and Feld, David A., “A General Nonlinear Mason Model of Arbitrary Nonlinearities in a Piezoelectric Film”, IEEE International Ultrasonics Symposium Proceedings, 2010, pp. 295-300.
  7. Li, LiangLiang, et al., “Small-Resistance and High-Quality-Factor Magnetic Integrated Inductors on PCB”, IEEE Transactions on Advanced Packaging, Vol. 32, No. 4, pp. 780-787, November 2009, DOI: 10.1109/TADVP.2009.2019845.

Tin whiskers growing on Zamak alloys


From now on, some entries of interest to readers will be published in English. This first entry explains the physical reasons why tin whiskers grow on copper or zinc surfaces, and methods to prevent this phenomenon.

Whiskers are small tin wires that grow because of differences in surface tension at the bonding surface of the metals when an electrochemical plating is applied. In 2006, the author and his R&D team had to research this phenomenon. We found that the functionality of one product degraded over time, particularly when it was stored for more than three months. We then decided to study the phenomenon, to understand why it occurs and to find possible solutions to prevent it in future developments.

INTRODUCTION

The occurrence of uncontrolled phenomena is important for Research & Development in any factory. Most of the time, private companies apply more Development than Research to their products. However, development often runs into drawbacks and phenomena that are not part of the company's know-how. These phenomena allow R&D teams to acquire new knowledge and apply it in the future.

In 2006, my R&D team found a phenomenon affecting the proper operation of a popular product. It was completely unknown to us, though it had been experienced by others: whiskers. It occurred in an important product we were developing. Because this product was the most important in our catalogue, and because there was stored material which might be defective, we were forced to do deeper research to find a solution. So my R&D team got to work on the problem.

Whiskers are, literally, a type of long mammalian hair. In Mechanical Engineering, however, whiskers are tin filaments that grow on a material that has been processed by electroplating. Electroplating is used in industry because it provides fine finishes, easy solderability and protection against corrosion. In our case, a tin electroplating is applied on zamak (a zinc, magnesium, aluminium and copper alloy, widely used in industrial housings) to make the zamak solderable (it is not solderable by itself) and to provide a well-finished product. Therefore, knowledge of the phenomenon and its possible solutions was very important for us.

TIN WHISKERS ON ZAMAK ALLOY

This phenomenon appeared on Zamak housings because they must receive a tin plating in order to solder onto them, since Zamak does not allow conventional soldering.

A drawback appeared when, after the material had been stored for a long time, this product, a narrow-band amplifier with an 8 MHz cavity filter, showed strong deviations in its electrical characteristics. This forced a new adjustment of the filter. In this product there were two separate adjustments: the first, made during assembly, and a second one 24 hours after the first. After completing both, the cavity filter usually remained stable, but a third adjustment was recommended if the product was stored for more than 3 months (storage rotation).

However, during the product development my R&D team discovered that the cavity filter was not stable and that the degradation of the selectivity and insertion loss grew over time, which meant that, even with the third adjustment, we could not ensure the stability of the filter.

Tin whisker growth

At first, the phenomenon looked like a failure of the electronic components, caused by a defective batch of capacitors. It then turned out to be a new phenomenon for us: we had accidentally grown whiskers on the tin surface.

As I said, whiskers are metal filament-like crystals which grow on the surface of the tin covering the zamak housing. The crystals are so fine that they are very brittle, breaking when a hand is passed over the surface, and they melt when a short-circuit current flows through them, which does not have to be very high. In our case, they reduced the cavity volume of the filter and changed its resonant frequency, moving the insertion response to higher frequencies.

When we started to study this phenomenon, we discovered that it had been known since the 40s and that even NASA had studied it in depth, so part of the way was already done: we verified that it was associated with the type of contact surface between the two materials and with the thickness of the tin plating. The surface tension of the materials and the storage temperature were involved too. In summary, whisker growth is governed by the equations of Dr. Irina Boguslavsky and her collaborator Peter Bush:

h_1=k_1 \dfrac {\sigma}{R_W T}

h_2=k_2 \left( {\sigma}- \dfrac {k_3}{L_W} \right)^n

According to the experimental observations, both equations predicted quite accurately the whisker growth observed in the tin layers. In these equations, σ represents the stress, related to the surface tension; LW is related to the thickness of the surface bonding; and n is a value that depends on the dislocation density and the temperature T. The terms k1, k2 and k3 are constants that depend on the material properties, and RW is the radius of the filament. The terms h1 and h2 refer to the growth of a filament that has already appeared in the bonding area (h1) and to the moment at which it appears (h2).

Tin whisker growth at 3 and 6 months

In these equations, when LW decreases, h2 increases, since it is a power law with n ≫ 1. Therefore, the plating thickness is one of the variables that can be controlled. In our case, this thickness had been reduced from 20 μm to 6-8 μm because the development incorporated a threaded “F”-type connector instead of the former 9½ mm DIN connector. Since the connectors were produced in the moulding process and threaded afterwards, they were made before the tin plating, and a 20 μm tin plating did not allow the connectors to be threaded.

The σ term is related to the surface tension at the junction and depends only on the materials used. Working with the plating manufacturer on different thicknesses, we verified that the expressions were consistent: for smaller thicknesses the growth was always much higher than for larger ones, and there was always a tendency to grow, although it was lower in 20 μm platings. Once the plating was made, the stress forces applied by the surface tension of the zamak “pushed” the tin atoms outwards to maintain the equilibrium conditions, opposed by the surface tension of the tin. With less tin thickness, the forces applied at the contact surface were higher than the opposing forces on the tin surface, and the internal forces opposing them were weaker, allowing the whiskers to grow outwards.

POSSIBLE SOLUTIONS TO WHISKER GROWTH

One solution, provided by Lucent Technologies, was to apply an intermediate nickel plating between the zinc surface and the tin plating.

Ni plating between Sn and Zn surfaces

Researchers from Lucent Technologies, after several experiments, found that the growth of whiskers was reduced dramatically, to nearly zero.

Growth in the two types of tin plating (bright tin and satin tin)

In the graphs we can see the growth of bright tin over a copper surface, which behaves similarly to zamak. It grows rapidly after 2 months, with a very steep growth slope. However, when an intermediate Ni layer is applied, the growth is practically zero. For satin tin, the growth starts after 4 months and shows a slightly lower slope; after applying the Ni layer, the growth is again practically zero.

The thickness of the Ni plating could be between 1 μm and 2 μm, while the tin plating could be kept at around 8 μm. Thus, the defective threading is avoided while the whiskers are eliminated. However, the process was quite expensive, so this option was discarded.

Therefore, we were facing another problem: how to eliminate the phenomenon, which implied increasing the thickness of the tin plating on the zamak alloy, but that in turn caused the defective threading of the “F” connector. A mould modification to provide more material on the connector was quite expensive and involved a long modification time, having to include inserts; however, it was the right way to correct the whiskers.

Another problem arose with the stored material and the material being manufactured. The stored material could not be reprocessed because it had already been assembled and could not be plated again. The intermediate solution was to remove the tin crystals by cleaning with compressed air.

For the material still in the manufacturing process (unplated parts), a temporary solution was to replace the tin plating with silver plating. Silver is solderable and can be applied in very thin layers while keeping its properties, but it has the disadvantage that its oxide gives a dirty, stained finish, affecting the product's aesthetics.

Finally, an in-depth study of the phenomenon established that increasing the tin thickness should become the standard. The defective threading would be corrected with a tool to re-cut the thread on the connector, and the mould could be modified, by changing the inserts of the threaded connectors, to obtain a 10-20 μm plating that does not fill the threads.

CONCLUSIONS

Tin whisker growth is a little-known phenomenon. It happens at the microscopic level and seems to have been studied mainly by agencies and national research laboratories, with large budgets and the appropriate means for its observation.

In Spain we have found few laboratories that study it. It occurs mostly in industry, caused by the handling of materials. This work was done by my R&D team, and it allowed us to acquire enough knowledge to correct and prevent the phenomenon, and to avoid its occurrence again.

However, there are many articles about it on the web, which allowed us to understand it and to analyse its causes and possible solutions.

REFERENCES

  1. H. Livingston, “GEB-0002: Reducing the Risk of Tin Whisker-Induced Failures in Electronic Equipment”; GEIA Engineering Bulletin, GEIA-GEB-0002, 2003
  2. B. D. Dunn, “Whisker formation on electronic materials”, Circuit World, vol. 2, no. 4, pp. 32-40, 1976
  3. R. Diehl, “Significant characteristics of Tin and Tin-lead contact electrodeposits for electronic connectors”, Metal Finish, pp. 37-42, 1993
  4. D. Pinsky and E. Lambert, “Tin whisker risk mitigation for high-reliability systems integrators and designers”, Proc. 5th Int. Conf. Lead Free Electronic Components and Assemblies, 2004
  5. Chen Xu, Yun Zhang, C. Fan and J. Abys, “Understanding Whisker Phenomenon: Driving Force for Whisker Formation”, Proceedings of IPC/SMEMA Council APEX, 2002
  6. I. Boguslavsky and P. Bush, “Recrystallization Principles Applied to Whisker Growth in Tin”, Proceedings of IPC/SMEMA Council APEX, 2003