
PLC Program for Flow Totalizer


This is a PLC program to implement a flow totalizer in the Siemens S7-300.

Problem Description

Write a PLC program to implement a totalizer for a flow meter. The flow meter has a 4-20 mA output that represents 0 to 100 liters/hour of fuel flow in a pipe.

Using this logic, we can calculate the total fuel that has passed through the pipe. When the totalizer value reaches 5000 liters, it should be reset automatically, or the value can be reset using a RESET button.

Problem Diagram

PLC Totalizer

Problem Solution

We can solve this problem with simple logic. Here we consider a flow meter measuring fuel with a maximum flow rate of 100 liters/hour.

First we convert this flow rate from L/h to L/s using the DIV instruction (dividing by 3600).

Then, using a 1-second clock pulse, we add this per-second value to another memory location, so the total is updated every second.

For this example, the maximum totalizer value is 5000 liters, after which the totalizer should be reset.

So we compare this limit with the actual value and reset the totalizer automatically, or the operator can reset it with the RESET button.

Program

Here is the PLC program to implement a totalizer in the S7-300.

List of Inputs/Outputs

Inputs List:-

  • I0.0 = RESET button

M Memory:-

  • M0.5 = 1-second (1s) clock pulse
  • M1.2 = Positive edge of the clock pulse
  • MD10 = Memory double word for the final output (L/h) of the flow meter
  • MD18 = Memory double word for the final output (L/s) of the flow meter
  • MD22 = Running total of liters (accumulated value)
  • MD26 = Total fuel in liters

Ladder diagram to implement a totalizer in the S7-300

PLC Program for Flow Totalizer - 1

Program Description

For this problem we consider an S7-300 PLC and TIA Portal software for programming.

Network 1:- Here we take the final output value of the flow meter in L/h (MD10). Using the DIV instruction, we convert the L/h flow into L/s and store the result in MD18.

Network 2:- Here the 1s clock pulse (M0.5) adds the per-second flow value every second and stores the running total in MD22.

Network 3:- Here we move the value of MD22 into MD26 (total fuel in liters) for display purposes.

Network 4:- In this network we reset the totalizer. If the total fuel is greater than 5000 liters (the value 5000 is only an example; it depends on the flow meter configuration and its range), the totalizer count is set to zero automatically, or it can be reset by pressing the RESET button (I0.0).

Note: The above logic is for explanation purposes only. We have considered only the final scaled output of the flow meter, so the 4-20 mA scaling is not shown in the ladder logic (see the sketch below).
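
For reference, here is a minimal Python sketch (not part of the original ladder logic) of the same calculation. It assumes a linear 4-20 mA scaling to 0-100 L/h and a 1-second update cycle; the comments map loosely to the memory addresses above.

```python
# Illustrative sketch of the totalizer logic (assumes linear 4-20 mA scaling
# and a 1-second execution interval, mirroring the 1 s clock pulse M0.5).

TOTALIZER_LIMIT = 5000.0  # liters, example value as in Network 4

def scale_current_to_flow(current_ma):
    """Scale a 4-20 mA signal to 0-100 L/h (not shown in the ladder logic)."""
    return (current_ma - 4.0) / 16.0 * 100.0

def totalizer_step(total_liters, current_ma, reset_button=False):
    """One 1-second cycle: convert L/h to L/s, accumulate, handle reset."""
    flow_l_per_h = scale_current_to_flow(current_ma)    # ~MD10
    flow_l_per_s = flow_l_per_h / 3600.0                # ~MD18 (DIV instruction)
    total_liters += flow_l_per_s                        # ~MD22 / MD26
    if total_liters > TOTALIZER_LIMIT or reset_button:  # Network 4 behaviour
        total_liters = 0.0
    return total_liters

# Example: 12 mA (= 50 L/h) held for one hour accumulates roughly 50 liters.
total = 0.0
for _ in range(3600):
    total = totalizer_step(total, current_ma=12.0)
print(round(total, 2))  # ~50.0
```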

Runtime Test Cases

PLC Program for Flow Totalizer Simulation

Article by
Bhavesh Diyodara

Types of Variable Area Flow Meters


Variable Area Flow Meters

In an orifice meter, there is a fixed aperture and flow is indicated by the differential pressure drop across it. In an area meter, there is a variable orifice and the pressure drop is relatively constant. Thus, in the area meter, flow is indicated as a function of the area of the annular opening through which the fluid must pass. This area is generally read out as the position of a float or obstruction in the orifice.

The effective annular area in an area meter is nearly proportional to the height of the float, plummet or piston in the body, and the relationship between float height and flow rate is approximately linear, giving a linear flow curve and scale graduations.

Types of Variable Area Flow Meters

Area meters are of two general types :

  1. Rotameters and
  2. Piston type meter.

Rotameters

In this meter, a weighted float or plummet contained in an upright tapered tube is lifted to the position of equilibrium between the downward gravitational force on the plummet and the upward drag and buoyancy forces of the fluid flowing past the float through the annular orifice. The flow rate can be read by observing the position of the float.
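
As an illustration of this force balance, the sketch below evaluates the commonly quoted rotameter equation (a textbook approximation, not taken from this article); the discharge coefficient and float dimensions are assumed example values.

```python
import math

# Textbook rotameter approximation: volumetric flow through the annular
# orifice at equilibrium of gravity, buoyancy and drag on the float.
#   Q = Cd * A_annulus * sqrt( 2 * g * V_f * (rho_f - rho) / (rho * A_f) )

def rotameter_flow(a_annulus, v_float, a_float, rho_float, rho_fluid, cd=0.75):
    g = 9.81  # m/s^2
    return cd * a_annulus * math.sqrt(
        2 * g * v_float * (rho_float - rho_fluid) / (rho_fluid * a_float)
    )

# Example (assumed values): small stainless-steel float in water.
q = rotameter_flow(
    a_annulus=5e-5,    # m^2, annular area at the current float position
    v_float=1e-6,      # m^3, float volume
    a_float=7.8e-5,    # m^2, maximum float cross-section
    rho_float=7800.0,  # kg/m^3, stainless steel
    rho_fluid=1000.0,  # kg/m^3, water
)
print(f"{q * 1000 * 60:.2f} L/min")  # roughly 3 L/min for these values
```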

Piston Type Meter

In this meter, a piston is accurately fitted inside a sleeve and is lifted by fluid pressure until sufficient port area in the sleeve is uncovered to permit the passage of the flow. The flow is indicated by the position of the piston.

Fig. shows the types of Variable area flow meter (a) Rotameter and (b) Piston Type meter.

Piston Type Flow Meter Principle

Performance Characteristics

1. Linearity

The flow rate (volume) through a variable area meter is essentially proportional to the area and, as a result, most of these meters have essentially equal-scale increments. A typical indicating rotameter scale is nonlinear by about 5%.

2. Differential

An important characteristic of the variable area meter is that the pressure loss across the float is a constant. The overall differential across the meter will increase at higher flow rates because of friction losses through the fittings.

3. Accuracy

The most common accuracy is ±2% of full-scale reading. Accuracy improves considerably with individual calibration and longer scale length. Repeatability is excellent.

4. Capacity

Variable area flow meters are the most commonly used means for measuring low-flow rates. Full scale capacities range from 0.5 cm3/min of water and 30 std cm3/min of air in the smallest units to over 1200 litres/min of water and 1700 m3/h of air in 8 cm height meters.

5. Minimum Piping Requirement

An area meter usually can be installed without regard to the fittings or lengths of straight pipe preceding or following the meter.

6. Corrosive or Difficult-to-Handle Liquids

These can often be handled successfully in an area meter. They include such materials as oil, tar, refrigerants, sulphuric acid, black liquor, beverages, aqua regia and molten sulphur. In general, if the nature of the fluid does not permit the use of a conventional differential pressure type meter because the fluid is dirty, viscous or corrosive, certain area meters have an advantage over other types of meters.

7. Pressure Drop

By placing very light floats in oversized meters, flow rates can be handled with a combination of very low pressure loss (often 2.5 cm of water column or less) and a 10:1 flow range.

Non Contact RADAR Level Transmitter Principle, Limitations, Design, Installation and Calibration


Measurement principle

A radar transmitter should be mounted on the top of a tank, chamber/cage or standpipe. The transmitter sends out microwaves via the antenna, which then travel down to the product surface. At the product surface, they are reflected back to the antenna of the radar transmitter. The propagation velocity of microwaves in free space is the speed of light (~300,000 km/s).

Non‐contact RADAR Level Transmitter Pulse

Figure – Radar measurement principle

Two different principles are used to measure the extremely short transmission times: Frequency Modulated Continuous Wave (FMCW) and pulse technology. The FMCW method emits microwaves continuously over a narrow frequency sweep. The frequency of the return reflection is slightly different from the frequency currently being transmitted, and the frequency difference is proportional to the distance. Because of multiple reflections, there are several signals mixed together. Therefore an FFT calculation has to be done internally by the radar transmitter to determine all the different single frequencies. This information is used to calculate an echo curve, from which the system can calculate the distance.

Radar FMCW principle

Figure – Radar FMCW principle
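
As a rough numerical illustration of the FMCW relationship described above (distance proportional to the beat frequency), here is a minimal sketch; the sweep parameters and beat frequency are assumed example values, not taken from any particular transmitter.

```python
# FMCW principle: the transmitted frequency is swept linearly, so the echo
# returns with a frequency offset (beat frequency) proportional to distance:
#   distance = c * f_beat / (2 * sweep_slope),  sweep_slope = bandwidth / sweep_time

C = 3.0e8  # speed of light, m/s

def fmcw_distance(f_beat_hz, bandwidth_hz, sweep_time_s):
    sweep_slope = bandwidth_hz / sweep_time_s
    return C * f_beat_hz / (2.0 * sweep_slope)

# Example (assumed): 1 GHz sweep over 2 ms, measured beat frequency 33.3 kHz.
print(f"{fmcw_distance(33.3e3, 1.0e9, 2.0e-3):.2f} m")  # ~10 m to the surface
```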

The pulse method consists of emitting short pulses of microwave energy. The time needed to receive the return reflection is measured. This time is a direct measure of the distance to the product surface (distance = velocity × time ⁄ 2, since the pulse travels down and back), from which the level is derived. Because of the high propagation speed (300,000 km/s), the radar transmitter can repeat this several million times per second without any interference between the individual signals. These signals are periodic, so the sensor sees the same echo curve several million times during one second. A special sampling method makes it possible to expand this fast echo curve into a slower time range.

Radar pulse principle

Figure – Radar pulse principle
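
A minimal sketch of the pulse time-of-flight calculation follows (illustrative only); the tank height and measured round-trip time are assumed example values.

```python
# Pulse radar: level is derived from the round-trip time of a microwave pulse.
C = 3.0e8  # speed of light, m/s

def level_from_round_trip(tank_height_m, round_trip_time_s):
    distance_to_surface = C * round_trip_time_s / 2.0  # down and back
    return tank_height_m - distance_to_surface

# Example (assumed): a 10 m tank and a 40 ns round trip -> surface 6 m down.
print(f"{level_from_round_trip(10.0, 40e-9):.2f} m")  # 4.00 m of product
```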

FMCW and pulse technologies produce the same result: an echo curve. In the past, the lower power consumption of ‘pulse’ technology has been an advantage for building a loop‐powered radar transmitter. Nowadays, both technologies deliver the same performance. There are no longer any major differences between these two measuring principles when it comes to accuracy, dynamic range, measuring range or response time.

Radar transmitters are available with different operating frequencies. For the measurement of liquids, there are low-frequency (4.5 – 10 GHz) sensors and high-frequency (24 – 27 GHz) sensors.

Radar sensor

Note: The higher the radar frequency, the narrower the radar beam angle of the sensor. For example, with a 26 GHz radar and an antenna aperture of 80 mm, the beam angle is about 12°. With a 79 GHz radar and an antenna aperture of 80 mm, the beam angle is only about 4°.
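
A common antenna rule of thumb (an approximation, not stated in the source) relates beam angle to wavelength and aperture roughly as beam angle ≈ 70° × λ / D. The sketch below shows that it reproduces the two example figures above to within a couple of degrees.

```python
# Rough beam-angle rule of thumb for a circular antenna aperture:
#   beam_angle (deg) ~ 70 * wavelength / aperture_diameter
C = 3.0e8  # m/s

def beam_angle_deg(freq_hz, aperture_m, k=70.0):
    wavelength = C / freq_hz
    return k * wavelength / aperture_m

print(f"{beam_angle_deg(26e9, 0.080):.1f} deg")  # ~10 deg (article quotes ~12 deg)
print(f"{beam_angle_deg(79e9, 0.080):.1f} deg")  # ~3.3 deg (article quotes ~4 deg)
```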

License

Local licensing requirements should be considered. Radar systems emit radio frequency energy; many countries require licensing by the communications regulatory agency when the emitted power exceeds some defined level.

Limitations

Radar installations require consideration of vessel geometry, nozzle location and size. For that reason, the focusing (beam angle) of the antenna has to be taken into account. This is more a concern for transmitters with low frequencies (4.5 – 10 GHz) than for those with higher operating frequencies (24 – 27 GHz).

Internal obstructions such as heating coils, standpipes, agitators, etc. need to be considered. This is more a concern for transmitters with low frequencies (4.5 – 10 GHz) than for those with higher operating frequencies (24 – 27 GHz).

Installation and troubleshooting may require product manufacturer‐specific training and a laptop PC with product manufacturer software.

The level of pure ammonia, vinyl chloride or methyl chloride cannot be measured with radar in the 24 – 27 GHz range, because the vapour of these gases damps the 24 – 27 GHz waves. For these applications, radar within the 4.5 – 10 GHz range can be used.

Heavy, thick foam has a substantial damping effect on microwaves. Particular attention should be paid to foam formation during the design phase. If the process generates thick foam that results in excessive damping, radar is not recommended; consequently, it is not possible to recommend any specific frequency for foam applications. Sensors with increased sensitivity for foam applications are available.

Radar transmitters can measure a gas–liquid interface but not a liquid–liquid interface.

If the dielectric constant (DC) of the product is lower than 1.4, this has to be taken into account when choosing the sensor or the mounting that fits the application. The dielectric constant is the ratio of the electric permittivity of the product to the permittivity of free space. The higher the dielectric constant, the stronger the signal reflected by the product. The dielectric constant of the product has less influence on the accuracy, because it changes only the amplitude and not the position of the echo on the echo curve, but more influence on the reliability of the measurement. See the figure below.

Radar with 26 GHz

Figure – Radar with 26 GHz

With modern high-quality radar, it is now possible to measure even products with very low dielectric constants, such as LNG/LPG. For older or less sensitive radar transmitters, it might be necessary to install a stilling well to measure LNG/LPG. Some product manufacturers have models available with special electronics that increase sensitivity and allow LNG/LPG measurement without a stilling well. In general, for products with DC < 1.4, the product manufacturer's recommendation should be obtained.

product dielectric constant

Table – Example of product dielectric constant

Note: Due to the low dielectric constant of some materials, it is possible to measure through plastic, glass or ceramic.

Radar measuring the level through the plastic vessel

Figure – Radar measuring the level through the plastic vessel

Selection

Very strong turbulence on the product surface can reflect the microwaves in different directions, which also decreases the echo amplitude. In very turbulent applications, such as a reactor with a strong agitator, this has to be considered when selecting the mounting position of the radar. The radar transmitter should be positioned behind a baffle; this allows an easy and repeatable measurement.

Where radar is used on a turbulent surface, a stilling well or sensor cage should be considered. The sensor cage or stilling well should have an ID at least equal to the radar horn diameter.

Vapour condensation and deposits can affect radar measurement performance. In this case, heat tracing and a round piece of PTFE may be installed in the mounting flange to prevent accumulation on the radar gauge cone. The use of a purge may also be considered.

The use of a PTFE shield on the radar cone prevents corrosion.

Radar mounted directly on a vessel which cannot be shut down should be provided with isolation valves (full-bore type).

Radar transmitters are available with a wide range of antenna designs and sizes for different applications.

Radar transmitters are available with different materials, seals and housings to fit the process conditions and environment.

Low frequency (4 – 10 GHz) is preferred when measuring in vapour and foam.

High frequency (above 25 GHz) is preferred in most other applications due to greater mounting flexibility. (A small beam angle is easier to install.)

High-frequency microwaves are suitable for most applications, have fewer installation considerations, have a narrow beam angle that avoids disturbances more easily, and provide a longer measuring range due to more focused energy.

Low-frequency microwaves have longer wavelengths, which penetrate foam, heavy vapour and condensation more easily. The wide beam angle can in some cases pass disturbances more easily (when the disturbing echo is located directly under the radar).

Design

Different antenna sizes and types are available to fit with the hook up requirements.

Radar antenna shapes

Figure – Radar antenna shapes

There are different antenna sizes and designs available. A larger-diameter horn antenna is the preferred solution, as it provides a more focused beam. However, a larger antenna also needs more space and therefore a bigger process connection.

The encapsulated horn antenna is especially made for aggressive or corrosive applications. The only material which is in contact with the medium is PTFE or PFA (there are no seals or metal parts in contact with the medium).

Installation

Radar transmitters can be mounted directly on the top of the vessel, without any valve or standpipe. This installation principle can be used if the process can be shut down when the radar needs maintenance. When radars are installed in a nozzle, the actual vessel or tank drawings should be checked before selecting the antenna size (e.g. nozzle IDs may be smaller than expected; this typically occurs with high-rating flanges or specific features such as long weld neck flanges).

Radar Direct top vessel Installation

Figure – Radar Direct top vessel Installation

The sensor should be mounted perpendicular to the surface. The mounting socket on top of a vessel should be as short as possible. In the case of instruments with horn antennas, the length of the socket should be less than the length of the horn antenna.

Radar sensor alignment

Figure – Radar alignment

Radar socket or nozzle radar installation

Figure – Radar socket or nozzle radar installation

If access to the antenna is required (e.g. for maintenance or operation), a full-bore ball valve should be used.

Note: The electronics of non-contact radar transmitters are generally removable without a process shutdown.

This reduces the influence on the microwaves and allows a reliable and accurate measurement. A full-bore valve is a solution if there is a need to remove the cone antenna for maintenance (cleaning). The radar electronics can be changed without opening the tank.

Radar full port ball valve

Figure – Radar full port ball valve

When installing a non-contacting radar sensor on a stilling well or chamber/cage tube, the ID should be at least the horn diameter.

Note: If the stilling well or chamber/cage ID is not constant, special parameters can be configured.

Radar installation using stilling well

Figure – Radar installation using stilling well

Radar installation using chamber

Figure – Radar installation using chamber/cage tube

For applications on insulated vessels, it is recommended to also insulate the nozzle, ball valve, flange and part of the instrument. This prevents condensation and build-up on the antenna and nozzle and increases the reliability and security of the measurement.

Floating Roof tanks

In some floating roof applications, it may be beneficial, or even the only way, to use the radar to measure to the floating roof instead of the liquid. The sensor should be mounted perpendicular to the surface. In this application, the radar tracks the roof instead of the liquid. An offset should be entered in the radar to allow for the roof thickness.

The radar will track the level down to where the roof leg lands. When the legs have landed, the radar will show the position of the roof even if the level is significantly lower.

The radar level accuracy is limited to how well the roof is following the liquid. There are seal frictions that influence how freely the roof is moving up and down. In some cases, the roof can stick during filling and emptying and that could result in measurement errors. If a lot of snow or water is collected on the roof, the radar gauge could start to measure to the snow/water instead of the roof.

If the radar is installed on an external floating roof tank, the radar needs a Federal Communications Commission (FCC) license. The radar gauge has an FCC Part 15 license, which is valid for a regular tank installation. In the external floating roof case, the radar gauge is in 'open air' and an FCC Part 90 license needs to be obtained. The application is straightforward and can be made online. This paragraph is pertinent for installations located in the United States of America, or wherever USA regulations are mandatory and FCC regulation applies.

The radar needs a horizontal reflector installed on the roof if the roof is not flat. This is normally the case for external floating roof tanks, where the pontoon has a slight angle. The reflector needs to be a minimum of 50 mm × 50 mm (75 mm × 75 mm is generally used). The reflector should be mounted horizontally in an area of the floating roof with as few metal obstructions as possible.

Calibration and configuration

Non-contact radars are initially calibrated at the factory with an initial dielectric value (e.g. 1.6).

  • Dry calibration: zero and full-scale values are adjusted manually. These scale values represent the minimum and maximum level to be measured. These settings can be made in situ or off-site.
  • Wet calibration: wet calibration is necessary to take into account all the false echoes due to the internal vessel shape. The electronics record these false echoes, which are then filtered out and no longer taken into account during the level measurement. This wet calibration should be performed with the level actually low so that all potential interfering reflections are detected.

Source : International Association of Oil & Gas Producers

Acknowledgements : IOGP Instrumentation and Automation Standards Subcommittee (IASSC), BG Group, BP, Endress + Hauser, Emerson, Honeywell, Krohne, Petrobras, PETRONAS Carigali Sdn Bhd, Repsol, Siemens, Statoil, Total, Vega, Yokogawa.

Capacitance Level Sensor Principle, Limitations, Installation & Calibration


Measurement principle

The capacitive measuring principle is based on the method of the operation of a capacitor.

A capacitor is formed by two differently charged electrodes isolated from each other. Applying an alternating voltage between the electrodes creates an electric field. The resulting capacitance depends on the distance between the electrodes, the surface area of the electrodes, and the insulating medium between them.

If the distance between the electrodes and their surface area are kept constant, only the medium has an effect on the electrical capacitance. When the medium changes, the electric field changes and, consequently, so does the capacitance, as follows:

  • Capacitance (C) = Electric field constant (Ɛ0) × Relative dielectric constant (DC) × Electrode surface area (A) ÷ Distance between the electrodes (d), where the electric field constant Ɛ0 is the permittivity of free space (Ɛ0 = 8.85 × 10^-12 F/m, i.e. C/(V·m)).

Capacitance Level measurement principle

Figure – Capacitance measurement principle
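
To make the principle concrete, here is a minimal sketch (not from the source) for a coaxial rod-in-tube probe partially immersed in a non-conductive liquid, using the standard coaxial-capacitor formula; the probe dimensions and DC value are assumed examples.

```python
import math

E0 = 8.85e-12  # F/m, permittivity of free space

def coaxial_probe_capacitance(level_m, probe_len_m, r_inner_m, r_outer_m, dc_liquid):
    """Rod-in-tube probe: the liquid-covered section in parallel with the air
    (vapour) section above it, using C = 2*pi*e0*er*L / ln(R/r) per section."""
    k = 2.0 * math.pi * E0 / math.log(r_outer_m / r_inner_m)
    c_liquid = k * dc_liquid * level_m
    c_air = k * 1.0 * (probe_len_m - level_m)
    return c_liquid + c_air

# Example (assumed): 2 m probe, 8 mm rod in a 50 mm ground tube, oil (DC ~ 2).
for level in (0.0, 1.0, 2.0):
    c_pf = coaxial_probe_capacitance(level, 2.0, 0.004, 0.025, 2.0) * 1e12
    print(f"level {level:.1f} m -> {c_pf:.1f} pF")  # rises by ~30 pF per metre
```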

Media with a low dielectric constant (DC value) cause very small changes of the capacitance value in level measurement, while media with a high DC value produce correspondingly large capacitance changes. In many interface applications, the medium with the lower DC value is on top, e.g. hydrocarbon (DC = 2) on water (DC = 80).

The upper medium provides only a minimum contribution to the overall capacitance value – only the water level (the interface layer) is thus indicated as level.

In order to make use of this effect, the DC value of the two media should be sufficiently different from each other.

Usually a medium with a low DC value is non‐conductive while a medium with a high DC value is conductive. Therefore interface measurement with a non‐conductive and a conductive medium is always possible.

Limitations

Capacitance sensor Operating Range

Figure – Capacitance operating range

If a process coats or fouls a capacitance probe, a compensation option may be required to prevent false high-level readings.

Continuous level capacitance transmitters require that the liquid being measured remains at a constant dielectric value. If this is not the case, the transmitter should have the capability to compensate for the liquid dielectric variation.

Probes mounted directly in the vessel typically cannot be replaced with the process in service unless they are mounted in a sensor cage with isolation valves.

The rod probes require sufficient height clearance, depending on the length of the probe.

It cannot measure liquids which have a viscosity above 2000 cSt.

Selection

The capacitive level measurement can be used in aggressive media when a fully coated probe (e.g. PTFE) is used.

Capacitive measurement has a very fast response time which makes it ideal for processes with fast level changes and small containers.

The measurement principle is not affected by the density variation of the media.

For interface measurements, a conductive and a non-conductive medium are required.

At this interface, the conductivity of the conductive medium should be greater than 100 μS/cm and the conductivity of the non-conductive medium should be lower than 1 μS/cm.

An oil-water emulsion can have any conductivity between 1 and 100 μS/cm, depending on the oil-water droplet distribution. This means that a capacitance probe will detect the medium above 100 μS/cm (i.e. the conductive medium), but will detect neither the emulsion layer (between 1 and 100 μS/cm) nor the non-conductive medium layer (i.e. <1 μS/cm).

Non‐conductive build‐up on the probe affects the measurement.

Design

The probes should consist of a metallic, conductive electrode with full plastic insulation, regardless of the conductivity of the medium.

When mounted, a good electrically conductive connection between the process connection and the tank should be ensured. An electrically conductive sealing band can be used.

Rod probes with a ground tube should be used in the event of severe lateral loads.

The length of the probe should be designed in accordance with level measurement range.

Installation

Capacitance Level Sensor installation

Figure – Capacitance Level Sensor installation

The vessel earthing (grounding) method, which can be critical to the operation of the device, should be assessed.

Calibration and configuration

Capacitance probes are calibrated at the factory for media with a conductivity ≥100 μS/cm (e.g. for all water-based liquids, acids, alkalis…).

A site calibration is only necessary if the 0%‐value or the 100%‐value should be adjusted to suit specific measurement requirements (e.g. tank/capacitance distance <250 mm, conductivity <100 μS/cm or specific range).

A distinction is generally made between two types of calibration:

  • Wet calibration: the probe can be calibrated over its full range, i.e. at the low level (0% level calibration) and the high level (100% level). Calibration at intermediate values can also be performed.
  • Dry calibration: the level can be simulated by entering the low and high level values. The transmitter automatically calculates the corresponding capacitance variation based on the factory calibration for a conductivity ≥100 μS/cm.

Capacitance Level Sensor calibration

Figure – Capacitance Level Sensor calibration

Source : International Association of Oil & Gas Producers

Acknowledgements : IOGP Instrumentation and Automation Standards Subcommittee (IASSC), BG Group, BP, Endress + Hauser, Emerson, Honeywell, Krohne, Petrobras, PETRONAS Carigali Sdn Bhd, Repsol, Siemens, Statoil, Total, Vega, Yokogawa.

Falling Ball Viscometer Principle


The falling ball viscometer uses the simple but precise Höppler principle to measure the viscosity of Newtonian liquids by measuring the time required for a ball to fall under gravity through a sample-filled tube.

The principle of the viscometer is to determine the falling time of a ball of known diameter and density through a near-vertical glass tube of known diameter and length, filled with the fluid to be tested. The viscosity of the sample liquid is related to the time it takes for the ball to travel the distance between two reference lines on the cylindrical tube. Inverting the measurement tube returns the ball to its start position, so the time over the same distance can be re-measured. The result is the dynamic viscosity in standard units (mPa·s).

The velocity of a ball falling through a liquid in a tube depends on the viscosity of the liquid. As the ball moves through the liquid, it is affected by gravity, buoyancy and frictional (viscous drag) forces: gravity acts downward, while buoyancy and friction act upward.

Falling Ball Viscometer Principle

How It Works

The falling ball viscometer is based on Höppler's measuring principle for simple but precise dynamic viscosity measurement of transparent Newtonian fluids. The basic concept is to measure the elapsed time required for the ball to fall under gravity through a sample-filled tube inclined at an angle. The tube is mounted on a pivot bearing which allows it to be quickly rotated 180 degrees, so a repeat test can be run immediately. Three measurements are taken, and the average falling time is the result. A conversion formula turns the time reading into a final viscosity value.

Calibration Procedure

  • Fill the falling tube with the liquid under study and carefully insert the ball. Add more liquid until no air bubbles can be seen, then close the falling tube with its cap.
  • Before starting the measurement, it is better to turn the falling tube up and down at least once in order to improve temperature uniformity along the tube.
  • Turn the falling tube 180 degrees. Start the stopwatch when the ball reaches the first mark on the tube and measure the time between the two marks. For better and more accurate results, it is recommended to repeat the measurement 10 times at each temperature.
  • After changing the bath temperature, it is highly recommended to wait at least 20 minutes to ensure temperature stability of the sample.
  • At the end of the experiment, drain the liquid from the tube and remove the ball very carefully. Clean the tube with a suitable solvent and/or a brush.
  • Record the densities of the liquid and the ball. Calculate the average time “t” for each temperature and calculate the viscosity using the falling-ball equation (see the sketch below).
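
The equation referred to in the last step is the standard Höppler falling-ball relation, viscosity = K × (ball density − liquid density) × falling time. The sketch below applies it with assumed example values for the ball constant and densities.

```python
# Hoeppler falling-ball viscometer relation (standard textbook form):
#   dynamic viscosity eta [mPa.s] = K * (rho_ball - rho_liquid) * t
# where K is the ball constant [mPa.s * cm^3 / (g * s)], densities are in
# g/cm^3 and t is the average falling time in seconds.

def falling_ball_viscosity(k_ball, rho_ball, rho_liquid, fall_time_s):
    return k_ball * (rho_ball - rho_liquid) * fall_time_s

# Example (assumed values): glass ball with K = 0.07, falling through a light oil.
eta = falling_ball_viscosity(k_ball=0.07, rho_ball=2.4, rho_liquid=0.85,
                             fall_time_s=95.0)
print(f"{eta:.1f} mPa.s")  # ~10.3 mPa.s
```

Rearranging the same relation, K = known viscosity ÷ ((ρball − ρliquid) × t), gives the viscometer constant from a reference liquid of known viscosity such as distilled water, as described under Calibration below.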

Calibration

To determine the viscometer (ball) constant K, perform all the above steps with a liquid of known viscosity, such as distilled water, then insert the known dynamic viscosity into the equation and solve for K.

Notes:

  • The liquid in the falling tube should be free of bubbles.
  • Generally, the calibration measurement is done at 20°C.
  • Timing starts when the lower edge of the ball touches the upper mark and ends when it crosses the lower mark.

Practical Pneumatic Instruments


To better understand the design and operation of self-balancing pneumatic mechanisms, it is helpful to examine the workings of some actual instruments. In this section, we will explore four different pneumatic instruments: the Foxboro model 13A differential pressure transmitter, the Foxboro model E69 I/P (electro-pneumatic) transducer, the Fisher model 546 I/P (electro-pneumatic) transducer, and the Fisher-Rosemount model 846 I/P (electro-pneumatic) transducer.

Foxboro model 13A differential pressure transmitter

Perhaps one of the most popular pneumatic industrial instruments ever manufactured is the Foxboro model 13 differential pressure transmitter. A photograph of one with the cover removed is shown here:

Pneumatic differential pressure transmitter

A functional illustration of this instrument identifies its major components:

Pneumatic differential pressure transmitter Parts

Part of the reason for this instrument’s popularity is the extreme utility of differential pressure transmitters in general. A “DP cell” may be used to measure pressure, vacuum, pressure differential, liquid level, liquid or gas flow, and even liquid density. A reason for this particular differential transmitter’s popularity is its excellent design: the Foxboro model 13 transmitter is rugged, easy to calibrate, and quite accurate.

Like so many pneumatic instruments, the model 13 transmitter uses the force-balance (more precisely, the moment-balance) principle whereby any shift in position is sensed by a detector (the baffle/nozzle assembly) and immediately corrected through negative feedback to restore equilibrium. As a result, the output air pressure signal becomes an analogue of the differential process fluid pressure sensed by the diaphragm capsule. In the following photograph you can see my index finger pointing to the baffle/nozzle mechanism at the top of the transmitter:

baffle_nozzle mechanism

Let’s analyze the behavior of this transmitter step-by-step as it senses an increasing pressure on the “High pressure” input port. As the pressure here increases, the large diaphragm capsule is forced to the right. The same effect would occur if the pressure on the “Low pressure” input port were to decrease. This is a differential pressure transmitter, meaning it responds to fluid pressure differences sensed between the two input ports.

This resultant motion of the capsule tugs on the thin flexure connecting it to the force bar. The force bar pivots at the fulcrum (where the small diaphragm seal is located) in a counter-clockwise rotation, tugging the flexure at the top of the force bar. This motion causes the range bar to also pivot at its fulcrum (the sharp-edged “range wheel”), moving the baffle closer to the nozzle.

As the baffle approaches the nozzle, air flow through the nozzle becomes more restricted, accumulating backpressure in the nozzle. This backpressure increase is greatly amplified in the relay, sending an increasing air pressure signal both to the output line and to the bellows at the bottom of the range bar. Increasing pneumatic pressure in the bellows causes it to push harder on the bottom of the range bar, negating the initial motion (Note) and returning the range bar (and force bar) to their near-original positions.

Note : This negating action is a hallmark of force-balance systems. When the system has reached a point of equilibrium, the components will have returned to (very nearly) their original positions. With motion-balance systems, this is not the case: one component moves, and then another component moves in response to keep the baffle/nozzle detector at a near-constant gap, but the components definitely do not return to their original positions or orientations.

Calibration of this instrument is accomplished through two adjustments: the zero screw and the range wheel. The zero screw simply adds tension to the bottom of the range bar, pulling it in such a direction as to further oppose the bellows’ force as the zero screw is turned clockwise. This action attempts to push the baffle closer to the nozzle and therefore increases air pressure to the bellows to achieve equilibrium. Turning the range wheel alters the lever ratio of the range bar, changing the ratio of capsule force to bellows force and thereby adjusting the transmitter’s span. The following photograph shows the range bar and range wheel of the instrument:

Pneumatic Calibration

As in all instruments, the zero adjustment works by adding or subtracting a quantity, while the span adjustment works by multiplying or dividing a quantity. In the Foxboro model 13 pneumatic transmitter, the quantity in question is force, since this is a force-balance mechanism. The zero screw adds or subtracts force to the mechanical system by tensioning a spring, while the range wheel multiplies or divides force in the system by changing the mechanical advantage (force ratio) of a lever.

force-balance mechanism
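
As a numeric illustration of this additive-zero, multiplicative-span idea, here is a minimal sketch of a generic calibration model (not the actual Foxboro mechanism); the 0-200 inH2O input range is an assumed example.

```python
# Generic transmitter calibration model: the zero adjustment adds an offset,
# while the span adjustment multiplies the input-to-output gain.

def transmitter_output(dp_inh2o, urv_inh2o=200.0, zero_shift_psi=0.0,
                       span_factor=1.0):
    """Map a differential pressure (0..urv_inh2o) to a 3-15 PSI signal."""
    gain = 12.0 / urv_inh2o                     # PSI per inH2O at nominal span
    return 3.0 + span_factor * gain * dp_inh2o + zero_shift_psi

print(transmitter_output(100.0))                      # 9.0 PSI at mid-scale
print(transmitter_output(100.0, zero_shift_psi=0.5))  # 9.5 PSI: whole curve shifts
print(transmitter_output(100.0, span_factor=1.1))     # 9.6 PSI: slope changes
```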

Foxboro model E69 “I/P” electro-pneumatic transducer

The purpose of any “I/P” transducer is to convert an electrical signal into a corresponding pneumatic signal. In most cases, this means an input of 4-20 mA DC and an output of 3-15 PSI, but alternative ranges do exist. An example of an I/P transducer manufactured by Foxboro is the model E69, shown here:

electro-pneumatic transducer

Two pressure gauges indicate supply and output pressure, respectively. Wires convey the 4-20 mA electrical signal into the coil unit inside the transducer.

A view with the cover removed shows the balancing mechanism used to generate a pneumatic pressure signal from the electric current input. The baffle/nozzle may be seen at the left of the mechanism, the nozzle located at the end of a bent tube, facing the flat baffle on the surface of the circular coil unit:

baffle nozzle Operation

As electric current passes through the coil, it produces a magnetic field which reacts against a permanent magnet’s field to generate a torque. This torque causes the coil to rotate counterclockwise (as viewed in the picture), with the baffle connected to the rotating assembly. Thus, the baffle moves like the needle of an analog electric meter movement in response to current: the more current through the coil, the more the coil assembly moves (and the baffle moves with it).

The nozzle faces this baffle, so when the baffle begins to move toward the nozzle, back-pressure within the nozzle rises. This rising pressure is amplified by the relay, with the output pressure applied to a bellows. As the bellows expands, it draws the nozzle away from the advancing baffle, achieving balance by matching one motion (the baffle’s) with another motion (the nozzle’s). In other words, the nozzle “backs away” as the baffle “advances toward:” the motion of one is matched by the motion of the other, making this a motion-balance instrument.

A closer view shows the baffle and nozzle in detail:

baffle nozzle

Increased current through the wire coil causes the baffle to move toward the right (as pictured) toward the nozzle. The nozzle in response backs away (also to the right) to hold the baffle/nozzle gap constant. Interestingly, the model E69 transducer employs the same pneumatic amplifying relay used in virtually every Foxboro pneumatic instrument:

Foxboro pneumatic instrument

This amplifying relay makes the system more responsive than it would be otherwise, increasing sensitivity and precision. The relay also serves as an air volume amplifier, either sourcing (supplying) or sinking (venting) air to and from a control valve actuator much more rapidly than the nozzle and orifice could do alone.

As in all instruments, the zero adjustment works by adding or subtracting a quantity, while the span adjustment works by multiplying or dividing a quantity. In the Foxboro model E69 transducer, the quantity in question is motion, since this is a motion-balance mechanism. The zero adjustment adds or subtracts motion by offsetting the position of the nozzle closer to or farther away from the baffle. A close-up photograph of the zero adjustment screw shows it pressing against a tab to rotate the mounting base plate upon which the coil unit is fixed. Rotating this base plate adds or subtracts angular displacement to/from the baffle’s motion:

Baffle Plate

The span adjustment consists of changing the position of the nozzle relative to the baffle’s center of rotation (axis), so that a given amount of rotation equates to a different amount of balancing motion required of the nozzle. If the nozzle is moved farther away from the baffle’s axis, the same rotation (angle) will result in greater nozzle motion (more output pressure) because the nozzle “sees” greater baffle movement. If the nozzle is moved closer toward the baffle’s axis, the same rotation (angle) will result in less nozzle motion (less output pressure) because the nozzle “sees” less baffle movement. The effect is not unlike the difference between a baseball striking the tip of a swung bat versus striking in the middle of a swung bat: the baseball struck by the tip of the bat “sees” a faster-moving bat than the baseball struck by the middle of the bat.

This span adjustment in the E69 mechanism consists of a pair of nuts locking the base of the bellows unit at a fixed distance from the baffle’s axis. Changing this distance alters the effective radius of the baffle as it swings around its center, therefore altering the gain (or span) of the motion balance system:

motion balance system

Fisher model 546 “I/P” electro-pneumatic transducer

The Fisher model 546 I/P transducer performs the same signal-conversion function (mA into PSI) as the Foxboro model E69, but it does so quite differently. The following photograph shows the internal mechanism of the model 546 transducer with its cover removed:

I to P electro-pneumatic transducer

This particular instrument’s construction tends to obscure its function, so I will use an illustrative diagram to describe its operation:

Current to Pressure Converter

The heart of this mechanism is a ferrous (substance containing the element iron) beam, located between the poles of a permanent magnet assembly, and centered within an electromagnet coil (solenoid). Current passing through the electromagnet coil imparts magnetic poles to the ends of the beam. Following the arrow head/tail convention shown in the coil windings (the dots versus X marks) representing conventional flow vectors pointing out of the page (top) and going into the page (bottom) for the coil wrapped around the beam, the right-hand rule tells us that the beam will magnetize with the right-hand side being “North” and the left-hand side being “South.” This electro-magnetic polarity interacts with the permanent-magnetic poles to torque the beam clockwise around its pivot point (fulcrum), pushing the right-hand side down toward the nozzle.

Any advance of the beam toward the nozzle will increase nozzle back-pressure, which is then fed to the balancing bellows at the other end of the beam. That bellows provides a restoring force to the beam to return it (nearly) to its original position. The phenomenon of an input force being counter-acted by a balancing force to ensure negligible motion is the defining characteristic of a force-balance system. This is the same basic principle applied in the Foxboro model 13 differential pressure transmitter: an input force countered by an output force.

If you examine the diagram carefully, you will notice that this instrument’s amplifying relay is not located within the force-balance feedback loop. The nozzle’s back-pressure is directly fed back to the balancing bellows with no amplification at all. A relay does exist, but its purpose is to provide a modest (approximately 2:1) pressure gain to raise the nozzle back-pressure to standard levels (3-15 PSI, or 6-30 PSI).

The next photograph shows the solenoid coil, force beam, and nozzle. If you look closely, you can see the copper-colored windings of the coil buried within the mechanism. The zero-adjustment spring is located above the beam, centered with the nozzle (below the beam):

solenoid coil, force beam, and nozzle

Fisher manufactured these I/P transducers with two different pneumatic ranges: 3-15 PSI and 6-30 PSI. The mechanical difference between the two models was the size of feedback bellows used in each. In order to achieve the greater pressure range (6-30 PSI), a smaller feedback bellows was used. This may seem backward at first, but it makes perfect sense if you mentally follow the operation of the force-balance mechanism. In order to generate a greater air pressure for a given electric current through the coil, we must place the air pressure at a mechanical disadvantage to force it to rise higher than it ordinarily would in achieving balance. One way to do this is to decrease the effective area of the bellows, so that it takes a greater air pressure to generate the same amount of balancing force on the beam.
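
To see why a smaller bellows raises the output range, following the force-balance argument above, here is a minimal sketch; the coil force and bellows areas are assumed example values.

```python
# Force balance in the I/P transducer: at equilibrium the feedback bellows
# force equals the force produced by the coil, so:
#   output pressure = balancing force / effective bellows area

def output_pressure_psi(balancing_force_lbf, bellows_area_in2):
    return balancing_force_lbf / bellows_area_in2

force = 7.5  # lbf, assumed force corresponding to a mid-scale coil current
print(output_pressure_psi(force, bellows_area_in2=0.833))   # ~9 PSI,  mid-scale of 3-15
print(output_pressure_psi(force, bellows_area_in2=0.4165))  # ~18 PSI, mid-scale of 6-30
```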

Ultrasonic Level Transmitter Principle, Limitations, Calibration and configuration


Measurement principle

Continuous non‐contacting ultrasonic level measurement is based on the time of flight principle.

An ultrasonic level instrument measures the time between sound energy being transmitted from the sensor to the surface of the measured material and the echo returning to the sensor.

As the speed of sound through the travel medium is known at the measured temperature, the distance to the surface can be calculated, and the level can be derived from this distance measurement.

Echo Processing built in to the instrument can allow the instrument to determine the material level of liquids, solids or slurries even in narrow, obstructed or agitated vessels.
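
As a rough numerical illustration of this time-of-flight principle (not taken from the source), the sketch below computes level from the echo time, using a common approximation for the speed of sound in air as a function of temperature.

```python
# Ultrasonic time of flight: distance = speed_of_sound * echo_time / 2 (down and back).
# Common approximation for the speed of sound in air: c ~ 331.3 + 0.606 * T(degC).

def ultrasonic_level(sensor_height_m, echo_time_s, air_temp_c=20.0):
    c = 331.3 + 0.606 * air_temp_c    # m/s, temperature-compensated
    distance = c * echo_time_s / 2.0  # sensor face to product surface
    return sensor_height_m - distance

# Example (assumed): sensor 5 m above tank bottom, 17.5 ms echo at 20 degC.
print(f"{ultrasonic_level(5.0, 0.0175):.2f} m")  # ~2.00 m of product
```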

Limitations

Ultrasonic is seldom used in upstream hydrocarbon process streams for level measurement; it might be used in atmospheric utility applications. In applications which are susceptible to vapour density variation, a compensation reference pin should be used.

Maximum measurement distance should be checked against the technology (above 30 m the reflectivity may be reduced and might cause a measurement error/problem).

Ultrasonic sensors have, as physical limitation, a blocking distance (close to the sensor) where they cannot measure reliably, e.g. 0.25 metres.

Vessel pressure should be limited to approximately 0.5 bar or less. Higher pressure may introduce uncertainty in the level measurement.

Vapour, vacuum or temperature gradients can influence the speed of sound and consequently can cause incorrect measurements.

Presence of foam or heavy turbulence on the surface of the measured material can cause unreliable measurement.

Selection

As ultrasonic is non‐contacting, even abrasive or aggressive materials can be measured. Vessel height and head room should be considered to select an instrument with suitable minimum and maximum range.

Design

Ultrasonic sensors should be made of a material suitable for the measured medium (e.g. PVDF or ETFE). Solid construction and a self-cleaning action on the face of the sensor provide a reliable, low-maintenance product.

Ultrasonic Level Transmitters Installation

Figure – Ultrasonic Liquid Measurement Arrangement.

Use of a submergence shield on a sensor allows an ultrasonic instrument to operate in potential flooding conditions, reporting a full vessel to the control system or continuing to operate pumps to remove the flood condition.

Calibration and configuration

Ultrasonic Level Transmitters Calibration and configuration

Figure – Ultrasonic Level Transmitter calibration

  • BD : Blocking Distance
  • SD : Safety Distance
  • E : Empty Calibration (Zero Point)
  • F : Full Calibration (Span)
  • D : Nozzle Diameter
  • L : Level

To perform an initial or 'empty' calibration, enter the distance E from the sensor face to the minimum level (zero point). It is important to note that in vessels with parabolic roofs or bottoms, the zero point should not be farther away than the point at which the ultrasonic wave reflects from the tank bottom.

When possible, a flat target plate that is parallel to the sensor face and directly below the sensor mounting position should be added to the bottom of the vessel for best empty tank performance.

Once the empty distance has been set, the high calibration point or 100% full point can be set. This is done either by setting the distance from the sensor face to the 100% full level or by entering a span (level) from the 0% or low calibration point to the 100% full level.

During commissioning, ensure that the 100% full or high calibration point does not enter the ‘blocking distance’ or ‘blind zone’ of the respective sensor. This will vary from sensor to sensor. Blocking distances or blind zones can be extended to avoid false high level reflections caused by obstructions, but they can only be reduced to a certain distance due to the physical limitations of the sensor itself. The minimum level (distance E/zero point) should be configured. This zero point should be above any dished boiler heads or conical outflow located at the bottom of the tank/vessel.

The maximum level (distance F/full span) should be configured. This distance F should take into account both BD ‘blocking distance’ and SD ‘safety distances’.

BD represents a dead zone in which the wave cannot make any measurement, and SD corresponds to a warning or alarm zone.
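
The sketch below shows, in illustrative form, how the configured empty distance E and full span F translate a measured distance into a level percentage, using the terminology of the figure above; the numeric values are assumed examples.

```python
# Convert a measured sensor-to-surface distance into percent level using the
# configured Empty distance (E, zero point) and Full span (F), and flag when
# the echo approaches the blocking distance (BD) plus safety distance (SD).

def level_percent(measured_distance_m, empty_e_m, full_span_f_m,
                  bd_m=0.25, sd_m=0.10):
    level_m = empty_e_m - measured_distance_m        # height above the zero point
    percent = 100.0 * level_m / full_span_f_m
    in_safety_zone = measured_distance_m < (bd_m + sd_m)
    return percent, in_safety_zone

# Example (assumed): E = 5.0 m, F = 4.0 m, echo measured at 2.6 m.
pct, warn = level_percent(2.6, 5.0, 4.0)
print(f"{pct:.1f} %  warning={warn}")  # 60.0 %  warning=False
```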


Source : International Association of Oil & Gas Producers

Acknowledgements : IOGP Instrumentation and Automation Standards Subcommittee (IASSC), BG Group, BP, Endress + Hauser, Emerson, Honeywell, Krohne, Petrobras, PETRONAS Carigali Sdn Bhd, Repsol, Siemens, Statoil, Total, Vega, Yokogawa.

Displacer (buoyancy) Level Transmitter Principle, Limitations, Design, Installation & Calibration


Measurement principle

The principle of displacement level measurement is based on Archimedes' principle. Displacement instruments determine liquid level by sensing the buoyant force exerted on a displacer by the liquid it displaces. Unlike the float in a float-type level instrument, the displacer moves very little relative to the rising or falling liquid.

Interface liquid–liquid level calculation example

The apparent force (Fa) = Weight of the displacer (m × g) – Archimedes (buoyancy) force     …see the figure below

The apparent mass is Ma = Fa/g = m – ρ1 × S × H – S × h_interface × (ρ2 – ρ1)             (Equation [1])

The range is:

  • At h_interface = 0:   Ma = m – ρ1 × S × H
  • At h_interface = H:   Ma = m – ρ2 × S × H

Displacement Measurement

Figure – Displacement Measurement

ρ1 : Liquid 1 density (kg/m3)
ρ2 : Liquid 2 density (kg/m3)
h_interface : Interface level between Liquid 1 and Liquid 2 (m)
m : Displacer mass (kg)
g : 9.81 (m/s²)
S : Displacer cross-sectional area (m²)
H : Displacer length (m)

Interface measurement requires its own connections into the upper and the lower phase. Equation [1] is applicable only if there is a single unknown variable: for an interface level measurement, the two densities must be known and an interface must be present within the displacer length. Alternatively, Equation [1] can be used to measure the density of a single fluid, in which case the displacer is kept fully immersed in that fluid.
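
Here is a minimal sketch (illustrative only) that evaluates Equation [1] for an example displacer in an oil-water interface service; the displacer dimensions and densities are assumed values.

```python
import math

# Equation [1]: apparent mass seen by the torque tube in an interface service.
#   Ma = m - rho1*S*H - S*h_interface*(rho2 - rho1)

def apparent_mass_kg(m_kg, s_m2, h_m, rho1, rho2, h_interface_m):
    return m_kg - rho1 * s_m2 * h_m - s_m2 * h_interface_m * (rho2 - rho1)

# Example (assumed values): 0.81 m long, 60 mm diameter, 2.5 kg displacer,
# light oil (rho1 = 850 kg/m3) floating on water (rho2 = 1000 kg/m3).
S = math.pi * 0.030 ** 2                 # m^2, displacer cross-section
for h in (0.0, 0.40, 0.81):              # interface at bottom, middle, top
    ma = apparent_mass_kg(2.5, S, 0.81, rho1=850.0, rho2=1000.0, h_interface_m=h)
    print(f"interface at {h:.2f} m -> apparent mass {ma:.3f} kg")
```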

Limitations

The level reading from the displacer can be incorrect if the temperature and/or density of the liquid in the vessel is different from that of the liquid in the external cage.

Unreliable measurements are due to dirty, foaming, fouling service as well as turbulent fluid or presence of solid particles in the fluid (e.g. sand).

Vibration (e.g. false alarm) and corrosion affect the level measurement.

Displacers can measure only the range of the displacer length. If the level rises above the top of the displacer, the displacer cannot measure the level.

Displacement transmitters can have higher maintenance needs. Many faults are due to encrustation, freezing of the torque tube, a broken or detached displacer hanger, failure of the electronic angular-motion detector, a stuck displacer, and displacer mass change due to corrosion.

Additional features may be required to eliminate turbulent liquid effects on the displacer.

Displacement transmitters can be much more difficult to calibrate, particularly if used for interface measurement.

Removal of the displacer from a vessel may require special rigging.

Displacers are available in a few standard lengths, with 0.36 m to 0.81 m being the most common.

Selection

Displacement transmitters can be used in a wide range of temperatures and pressures.

Displacement transmitters are suitable for interface level measurement if the specific gravities differ significantly and changes in specific gravity due to composition or temperature do not affect the reading. It is generally accepted that the difference between specific gravities should be greater than 0.1 (if a gravity difference of only 0.1 is used, the impact on accuracy needs to be assessed).

Displacement type level instruments should not be used in severely turbulent, dirty, foaming, fouling service or in case of presence of solid particles in the fluid (e.g. sand). These conditions lead to unreliable measurements from displacement level instruments.

Displacement type level instruments should not be used for liquid‐liquid interfaces where there is potential emulsion forming.

Displacement type level instruments should not be used in liquid‐liquid or liquid–gaseous services where either the upper or lower fluid specific gravity is not relatively constant.

Displacement transmitters can also be used for density measurement if the displacer is permanently and fully immersed in a single fluid.

Design

Displacers should be made of stainless steel or other material compatible with the process fluid.

The displacer length should be selected according to the level range of the application.

Installation

The preferred installation for displacers is in a cage/chamber mounted externally to the vessel. Block, drain and vent valves should be installed so the chamber can be filled and emptied for maintenance activities.

Vessel nozzles should be located appropriately for the interface level to be measured.

Instrument connections directly at the bottom of the vessel should be avoided.

Calibration and configuration

Calibration should be performed at Product Manufacturer premises and verified prior to the commissioning activities.

Calibration may be performed in situ or on a bench with weights. Bench calibration with weights should be performed using the apparent mass.

In situ calibration should be performed using a level gauge or sight glass if fitted. Otherwise, a clear flexible external tube could be used.

The figure below shows a typical arrangement which should be used to calibrate chamber-mounted instruments in situ.

Displacement Level Transmitter in situ calibration

Figure – Displacement in-situ calibration

Source : International Association of Oil & Gas Producers

Acknowledgements : IOGP Instrumentation and Automation Standards Subcommittee (IASSC), BG Group, BP, Endress + Hauser, Emerson, Honeywell, Krohne, Petrobras, PETRONAS Carigali Sdn Bhd, Repsol, Siemens, Statoil, Total, Vega, Yokogawa.

PLC Program for IEC timers ( TON, TOF, TP &TONR ) used in S7-1200


This is a PLC program for the IEC timers (TON, TOF, TP & TONR) used in the S7-1200, written in TIA Portal.

Problem Description

Implementation of the IEC timers (TON, TOF, TP & TONR) in an S7-1200 PLC using TIA Portal.

In many applications there is a requirement to control timing or signal flow. For example, a valve or a motor might need to operate for a particular interval of time, or be switched ON after some time interval or delay.

Problem Diagram

Problem Solution

For this problem we will use the IEC timers (TON, TOF, TP & TONR) in an S7-1200 PLC, with examples.

There are a number of different types of timers found in PLCs. As shown in the diagram above:

  • An ON-delay timer (TON) turns its output ON after a set time delay.
  • An OFF-delay timer (TOF) keeps its output ON for a fixed period of time after the input turns OFF.
  • A pulse timer (TP) switches its output ON for a fixed period of time.
  • An accumulator timer (TONR) records and accumulates the time interval during which its input is ON.

Here we consider an example of four motors and four switches to explain the timers. We need to start the four motors in different ways:

  1. The first motor starts after a 10 s delay.
  2. The second motor starts immediately and switches OFF 10 s after its switch is turned OFF.
  3. The third motor starts immediately with a pulse and switches OFF when the 10 s pulse time elapses.
  4. The fourth motor starts after an accumulated (total) time of 10 s.

Program

Here is the PLC program to implement the IEC timers (TON, TOF, TP & TONR) in the S7-1200 PLC.

List of Inputs/Outputs

Inputs List:-

  • SWITCH 1 = I0.0
  • SWITCH 2 = I0.1
  • SWITCH 3 = I0.2
  • SWITCH 4 = I0.3
  • Reset = I0.4

Outputs List:-

  • MOTOR 1 :- Q0.0
  • MOTOR 2 :- Q0.1
  • MOTOR 3 :- Q0.2
  • MOTOR 4 :- Q0.3

Ladder diagram for the IEC timers (TON, TOF, TP & TONR) in the S7-1200 PLC.

We can use the Generate ON-delay (TON) instruction to delay setting of the Q output by the programmed duration PT. The instruction starts when the result of logic operation (RLO) at input IN changes from 0 to 1 (positive edge).

The current time value can be monitored at the ET output of the timer block. The timer value starts at T#0s and stops when the duration PT is reached. The ET output is reset as soon as the signal state at the IN input changes back to 0.

PLC Program for IEC timers

We can use the Generate OFF-delay (TOF) instruction to delay resetting of the Q output by the programmed duration PT. The Q output is set when the result of logic operation (RLO) at input IN changes from 0 to 1 (positive signal edge); the duration PT starts when IN changes back from 1 to 0, and Q is reset when PT has expired. The current time value can be monitored at the ET output.

We can use the Generate pulse (TP) instruction to set the output Q for a programmed duration. The instruction starts when the RLO at input IN changes from 0 to 1 (positive edge), and the programmed time PT begins at that moment. Even if a new positive edge is detected, the signal state at output Q is not affected as long as the PT duration is still running.

The Time accumulator (TONR) instruction is used to accumulate time values up to the duration set by the programmed time (PT) parameter. When the signal state at input IN changes from 0 to 1 (positive edge), the instruction executes and time is accumulated while IN remains 1. The accumulated value is retained even when the signal state at IN changes from 1 to 0 (negative edge), and output Q is set once the accumulated time reaches PT. The R input resets the accumulated time and the output Q.
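
For readers without TIA Portal at hand, here is a minimal Python sketch (illustrative only, not Siemens code) that emulates the behaviour of the four IEC timers on a per-scan basis; PT is 10 s as in the example.

```python
# Minimal per-scan emulation of the four IEC timer behaviours (illustrative
# only, not Siemens code). Time advances in fixed scan steps of DT seconds.

DT = 0.1   # scan (cycle) time, s
PT = 10.0  # programmed time, s, as in the example

class TON:    # output turns ON after the input has been ON for PT
    def __init__(self): self.et = 0.0
    def scan(self, in_bit):
        self.et = min(self.et + DT, PT) if in_bit else 0.0
        return in_bit and self.et >= PT

class TOF:    # output follows the input, then stays ON for PT after it drops
    def __init__(self): self.et = PT
    def scan(self, in_bit):
        self.et = 0.0 if in_bit else min(self.et + DT, PT)
        return in_bit or self.et < PT

class TP:     # a rising edge produces one PT-long pulse; edges during the pulse are ignored
    def __init__(self): self.et, self.prev = PT, False
    def scan(self, in_bit):
        if in_bit and not self.prev and self.et >= PT:
            self.et = 0.0            # start a new pulse
        elif self.et < PT:
            self.et += DT            # pulse in progress
        self.prev = in_bit
        return self.et < PT

class TONR:   # accumulates ON-time; output latches once the total reaches PT
    def __init__(self): self.et = 0.0
    def scan(self, in_bit, reset=False):
        if reset:
            self.et = 0.0
        elif in_bit:
            self.et = min(self.et + DT, PT)
        return self.et >= PT

# Example: hold SWITCH 1 ON; MOTOR 1 (TON) only comes ON after ~10 s.
motor1 = TON()
q = False
for _ in range(int(12.0 / DT)):
    q = motor1.scan(True)
print(q)  # True, because the input has been ON longer than PT
```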

Program Description

For this problem we consider an S7-1200 PLC and TIA Portal software for programming.

Network 1 :- In this network we have used an ON-delay timer (Generate ON-delay) for MOTOR 1 (Q0.0). When the status of SWITCH 1 (I0.0) changes from 0 to 1, the timer instruction is executed and MOTOR 1 (Q0.0) is activated after a 10 s delay.

Network 2 :- In this network we have used an OFF-delay timer (Generate OFF-delay) for MOTOR 2 (Q0.1). When the status of SWITCH 2 (I0.1) changes from 0 to 1, the timer instruction is executed and MOTOR 2 (Q0.1) is activated immediately. When the status of SWITCH 2 (I0.1) changes back to 0, the programmed time (PT) starts, and after this time MOTOR 2 (Q0.1) switches OFF.

Network 3 :- In this network we have used a pulse timer (Generate pulse) for MOTOR 3 (Q0.2). When the status of SWITCH 3 (I0.2) changes from 0 to 1, the timer instruction is executed and MOTOR 3 (Q0.2) is activated immediately for the programmed time. Even if a new positive edge is detected, the status of MOTOR 3 (Q0.2) is not affected as long as the programmed time (PT) is running.

Network 4 :- In this network we have used an accumulator timer (Time accumulator) for MOTOR 4 (Q0.3). When the status of SWITCH 4 (I0.3) changes from 0 to 1, the timer instruction is executed and MOTOR 4 (Q0.3) starts once the accumulated time reaches 10 s. MOTOR 4 (Q0.3) remains ON even when the input status changes back to 0. The Reset (I0.4) input is required to reset the timer (the accumulated time).

Runtime Test Cases

IEC timers used in S7-1200 PLC

Article by
Bhavesh Diyodara

PLC Program for Blinking Lamp on 5 Seconds Interval


This is a PLC program for blinking (ON/OFF) a lamp at a 5-second interval.

Problem Description

Make the indicator lamp turn ON for five seconds and OFF for five seconds, repeatedly.

Make a program which switches the lamp ON for 5 seconds, then OFF for 5 seconds, then ON for 5 seconds, then OFF again for 5 seconds, and so on.

Problem Diagram

PLC Program for Blinking Lamp on 5 Seconds Interval

Problem Solution

This problem can be solved by using timers. In this case we will use the TON (ON-delay timer).

For explanation, we consider one SWITCH to enable the ON/OFF cycle and one lamp as the output.

When the user presses the SWITCH, the lamp energizes and remains ON for 5 seconds, after which it is OFF for 5 seconds. This cycle repeats itself.

Program

Here is the PLC program for blinking (ON/OFF) a lamp at a 5 s interval.

List of Inputs/Outputs

Inputs List:-

  • SWITCH = I0.0

Outputs List:-

  • LAMP = Q0.0

M Memory:-

  • M0.0 = Bit memory for the lamp OFF condition

Ladder diagram for blinking (ON/OFF) a lamp at a 5 s interval.

PLC Program for Blinking Lamp - 1

Program Description

For this problem we consider an S7-1200 PLC and TIA Portal software for programming.

Network 1 :- In this network, when the SWITCH (I0.0) is pressed and the lamp OFF condition is not present, the lamp (Q0.0) is ON. Here we used an NO contact of the SWITCH (I0.0) and an NC contact of the lamp OFF condition (M0.0).

Network 2 :- In this network, when the lamp (Q0.0) is ON, a TON (ON-delay timer) instruction is executed and, after the programmed time, it sets the lamp OFF condition (M0.0). Here we used an NO contact of the lamp (Q0.0) and a TON timer with a programmed time of 5 s.

Network 3 :- As per our requirement, the lamp OFF condition (M0.0) should be cleared after a 5 s delay, so we used a TON again. Here we used an NO contact of the lamp OFF condition (M0.0) and a TON with a programmed time of 5 s; when it expires, M0.0 is reset and the cycle repeats (see the sketch below).
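
Here is a minimal Python sketch (illustrative only) that mimics the three-network logic above on a per-scan basis, assuming the switch is held ON; it shows the lamp alternating roughly 5 s ON and 5 s OFF.

```python
# Scan-based emulation of the three networks above (illustrative only):
#   Network 1: LAMP   = SWITCH AND NOT lamp_off
#   Network 2: TON on LAMP      -> sets   lamp_off after 5 s
#   Network 3: TON on lamp_off  -> clears lamp_off after 5 s

DT = 0.1                  # scan time, s
PT_STEPS = int(5.0 / DT)  # 5 s expressed in scans

def run(total_s=20.0):
    t_on = t_off = 0          # elapsed-time counters of the two TON timers
    lamp_off = False          # M0.0
    events, lamp_prev = [], None
    for step in range(int(total_s / DT)):
        lamp = not lamp_off                    # Network 1 (SWITCH held ON)
        t_on = t_on + 1 if lamp else 0         # Network 2 timer
        if t_on >= PT_STEPS:
            lamp_off = True
        t_off = t_off + 1 if lamp_off else 0   # Network 3 timer
        if t_off >= PT_STEPS:
            lamp_off = False
        if lamp != lamp_prev:
            events.append((round(step * DT, 1), "ON" if lamp else "OFF"))
            lamp_prev = lamp
    return events

print(run())  # the lamp alternates roughly every 5 seconds: ON, OFF, ON, ...
```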

Runtime Test Cases

PLC Program for Blinking Lamp Simulation

Article by
Bhavesh Diyodara