
Substation And Equipment Surge Protection Engineering Essay

Substation and Equipment Surge Protection: Types, characteristics, related calculations, examples with applications for industrial systems

Gautami Bhatt

Abstract—This paper describes the various types of surge protectors and their characteristics. It also describes lightning surge arresters and how the power system is protected against lightning surges.

Index Terms—surge, lightning, switching, BIL, insulation, protection, substation

Electrical equipment is expected to have a long service life of more than 25 years. Conductors are supported on insulators or embedded in an insulation system. The internal and external insulation is continually exposed to normal voltages and occasional abnormal voltages. These abnormal voltages include temporary overvoltages at power frequency, lightning surges and switching surges.

Power frequency overvoltages have a low overvoltage factor but a long duration, while surges have a higher overvoltage factor and a much shorter duration. Protection against power frequency overvoltages is achieved by employing an overvoltage relay at the secondary of a transformer or by using an inverse definite minimum time (IDMT) overvoltage relay.

Protection against transient voltage surges is achieved with the help of surge arresters. Surge arresters, coordinated spark gaps, surge suppressors, overhead ground wires, neutral earthing, shunt capacitors, etc. are located strategically to intercept lightning surges or to reduce the peak and rate of rise of surges.

Protective systems for the different abnormal voltages act at different speeds depending on the overvoltage. Temporary power frequency overvoltages last anywhere from milliseconds to seconds, and the overvoltage relay accordingly acts within about 70 ms. Lightning surges last for microseconds, and the surge arrester typically acts within 1.2 µs. Switching surges are in the range of a few hundred microseconds, and arresters for them are typically designed around a 100 µs wave.

This paper focuses on lightning surges, their types, protection against them, and the different types of lightning surge arresters.

Benjamin Franklin (1706-90) performed his famous kite-flying experiment in a thunderstorm in 1752. Before his discovery, lightning was considered to be an "Act of God". Franklin proved that the lightning stroke was a discharge of electricity. He also invented the lightning rod, fixed on tall buildings and earthed to protect them from lightning strokes.

The large spark, accompanied by light, produced by an abrupt, discontinuous discharge of electricity through the air from the clouds, generally under turbulent atmospheric conditions, is called lightning.

Representative values of a lightning stroke:

Voltage: 200 MV

Current: 40 kA

Duration: 10^-5 s

Power: 8 x 10^9 kW

Energy: 22 kWh
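These representative figures are mutually consistent, which a quick arithmetic check confirms; the stroke current is taken here as 40 kA, the value that reproduces the stated power and energy:

```python
# Cross-check of the representative lightning-stroke values above.
V = 200e6      # stroke voltage, volts (200 MV)
I = 40e3       # stroke current, amperes (40 kA)
t = 1e-5       # stroke duration, seconds

P_w = V * I              # instantaneous power, watts
P_kw = P_w / 1e3         # in kW -> 8 x 10^9 kW as stated
E_j = P_w * t            # energy, joules
E_kwh = E_j / 3.6e6      # in kWh -> ~22 kWh as stated

print(f"Power  : {P_kw:.1e} kW")
print(f"Energy : {E_kwh:.1f} kWh")
```

The enormous power but tiny duration is why the total energy of a stroke is only on the order of a domestic daily consumption.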

An overhead conductor accumulates statically induced charge when charged clouds pass above it. If the cloud is swept away, the charge on the conductor is released and travels in both directions, giving rise to two travelling waves. The earth wire does not prevent such surges.

Another curious phenomenon is the unpredictable path of lightning strokes. Normally they seek the earth and are therefore intercepted by lightning rods, trees, tall structures, etc.; the Empire State Building has been struck by lightning many times. Some strokes, however, do not observe any rules and travel in a haphazard fashion.

A B-type stroke occurs due to a sudden change in the charges of the clouds. If cloud 1 suddenly discharges to cloud 2, there is a sudden change in the charge on cloud 3. A discharge that then occurs between cloud 3 and earth is called a B stroke. Such a stroke does not hit the lightning rod or earth wire, and no protection can be provided to the overhead line against it.

Attractive effect of overhead ground wires and earth rods (masts):

Earth rods (also called lightning rods) are placed on tall buildings and connected to the earth. Positive charges accumulate on the sharp points of the lightning rods, which is why lightning strokes are attracted to them. Earth wires are strung above overhead transmission lines and grounded at every tower. Positive charges accumulate on this wire, and negatively charged strokes are attracted to it. In the absence of the earth wire, the lightning stroke would strike the line conductors, causing flashovers on the transmission line.

Earth wires do not provide 100% protection: weak strokes and B-type strokes are not attracted by them. Nonetheless, the earth wire has proved to be a good solution to the very dangerous direct strokes.

Earth wires have a shielding angle; conductors within the shielding zone are protected against direct strokes. The shielding angle is between 30 and 40 degrees, and an angle of 35 degrees is considered economical and satisfactory for overhead lines.

The equipment in a substation is protected from direct lightning strikes in one of the following ways.

According to IEC practice, masts are preferred for outdoor switchyards up to 33 kV. For 66 kV and above, lightning masts become too tall and uneconomical; overhead shielding wires are preferred because they give adequate protection and the structures carrying them are considerably shorter than lightning masts.

The entire switchyard is provided with an earthed overhead shielding screen. The conductor is usually 7/9 SWG galvanized steel, round stranded.

Transmission line conductors are protected by an earthed overhead shielding conductor. The shielding angle (alpha) is defined as follows: a vertical line is drawn from the earth wire, and angle alpha is measured on each side of this vertical. The envelope within angle 2·alpha is called the zone of protection.

The shielding angle is defined as 30 degrees in ANSI practice and 45 degrees in IEC practice.
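As a rough sketch of this zone-of-protection geometry (a simple cone model; practical shielding design uses more detailed electrogeometric methods), the half-width of the protected zone at ground level for an earth wire at height h is h·tan(alpha):

```python
import math

def protected_halfwidth(h_m: float, alpha_deg: float) -> float:
    """Half-width (m) of the ground-level protection zone under an earth
    wire at height h_m with shielding angle alpha_deg (simple cone model)."""
    return h_m * math.tan(math.radians(alpha_deg))

# Example: a 20 m high earth wire with the two quoted shielding angles.
for alpha in (30.0, 45.0):
    w = protected_halfwidth(20.0, alpha)
    print(f"alpha = {alpha:>4.0f} deg -> half-width = {w:.1f} m")
```

A larger shielding angle widens the nominal zone but gives a higher probability of shielding failure, which is why 30-35 degrees is preferred for lines.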

Strokes affecting a line can be of the following kinds: direct strokes on a line conductor, direct strokes on the tower top, direct strokes on the ground wire, and indirect (B-type) strokes on an overhead line conductor.

Direct strikes on overhead lines are the most harmful. The voltage being of the order of several million volts, insulators flash over, puncture and shatter. The wave travels in both directions, shattering line insulators, until the surge is sufficiently dissipated; it eventually reaches the substation and stresses the equipment insulation. These strikes can often be prevented from reaching the line conductor: all high voltage overhead lines are protected by earth conductors, and in substations an earthed mesh covers the complete switchyard.

Direct Strokes on tower-top

Consider:

L = inductance of the tower,

I = current in the tower,

R = effective resistance of the tower,

e = surge voltage between the tower top and earth, given by e = I·R + L·(dI/dt).

So if the rate of rise of current is 10 kA/µs, the tower resistance 5 Ω and the inductance 10 µH, the surge voltage e comes to 200 kV. This voltage appears between the tower top and earth. The line conductors are virtually at earth potential because of neutral grounding, so this voltage effectively appears between the line conductors and the tower top. If it exceeds the impulse flashover level, a flashover occurs between the tower and the line conductor. The footing resistance of each tower is therefore kept low.
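Using the relation e = I·R + L·(dI/dt), these figures can be checked numerically; the tower current of 20 kA below is an assumed value (not given in the text) chosen so that the resistive and inductive terms together reproduce the 200 kV result:

```python
# Surge voltage at the tower top for a direct stroke: e = I*R + L*dI/dt.
R = 5.0        # effective tower resistance, ohms (from the text)
L = 10e-6      # tower inductance, henries (10 uH, from the text)
dI_dt = 10e9   # rate of rise, A/s (10 kA/us, from the text)
I = 20e3       # tower current, amperes -- ASSUMED illustrative value

e = I * R + L * dI_dt   # 100 kV resistive + 100 kV inductive
print(f"e = {e/1e3:.0f} kV")
```

With either term alone contributing 100 kV, it is clear why a low footing resistance materially reduces the back-flashover risk.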

A direct stroke on the earth wire at mid-span can cause a flashover between the line conductor and the earth wire, or between the line conductor and the tower.

Indirect strokes on a line conductor can have the same effect as a direct stroke. They are more harmful for distribution lines but are not significant for EHV lines. Other relevant factors are the tower footing resistance and the insulation level of the lines. For lines rated above 110 kV the line insulation is high and back flashovers are rare; for lines between 11 kV and 33 kV the insulation is relatively low and back flashovers are likely to occur.

Several devices are used to protect the power system against lightning surges. An overview is given here, and some are discussed in detail.

A. Overview of protective devices against lightning surges

Device: Rod gaps
Where applied: across insulator strings, bushing insulators, support insulators
Remarks: difficult to coordinate; flashover voltage varies with polarity and wave shape; creates a dead short circuit; cheap

Device: Overhead ground wires (earthed)
Where applied: above overhead lines; above the substation area
Remarks: provide effective protection against direct strokes on line conductors, towers and substation equipment

Device: Vertical masts
Where applied: in substations
Remarks: used instead of overhead shielding wires

Device: Lightning spikes/rods (earthed)
Where applied: above tall buildings
Remarks: protect buildings against direct strokes; angle of protection between 30 and 40 degrees

Device: Lightning arresters
Where applied: on incoming lines in each substation; near terminals of transformers and generators; pole mounted on distribution lines
Remarks: divert the overvoltage to earth without causing a short circuit; used at every voltage level in every substation and for each line; connected phase to ground

Device: Surge absorbers
Where applied: near rotating machines or switchgear; across series reactor valves
Remarks: a resistance-capacitance combination absorbs the overvoltage surge and reduces the steepness of the wave

B. Rod gaps

The simplest protection of line insulators, equipment insulators and bushings is given by rod gaps, or coordinating gaps. Conducting rods with an adjustable gap are provided between the line terminal and the earthed terminal of the insulator; the medium in the gap is air. The rods are approximately 12 mm in diameter, round or square. The gap is adjusted to break down at about 20% below the flashover voltage of the insulator, and the distance between the arc path and the insulator should be more than one third of the gap length.

Precise protection is not possible with rod gaps: the breakdown voltage varies with polarity, steepness, wave shape and weather. Moreover, power frequency current continues to flow even after the high voltage surge has vanished, creating an earth fault that can only be interrupted by a circuit breaker; operation of a rod gap therefore leads to discontinuity of supply. The advantages of the gap are low cost and easy adjustment on site. For more precise operation, surge arresters are used.
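The two setting rules quoted above (breakdown about 20% below the insulator flashover voltage, and arc path at least one third of the gap length away from the insulator) can be expressed directly; the 650 kV and 300 mm figures are illustrative inputs, not values from the text:

```python
# Rod-gap setting rules from the text.
def rod_gap_setting(insulator_flashover_kv: float) -> float:
    """Target breakdown voltage for the coordinating gap (kV):
    about 20% below the insulator flashover level."""
    return 0.8 * insulator_flashover_kv

def min_arc_clearance(gap_length_mm: float) -> float:
    """Minimum distance between arc path and insulator surface (mm):
    at least one third of the gap length."""
    return gap_length_mm / 3.0

print(rod_gap_setting(650.0))     # gap set for a 650 kV flashover insulator
print(min_arc_clearance(300.0))   # clearance for a 300 mm gap
```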

In horn gaps, the gap between the horns is small at the bottom and large at the top. An arc forms at the bottom during a high voltage surge and is driven up along the horns by electromagnetic action; its length increases until the arc blows out.

The impulse ratio of a protective device is the ratio of its breakdown voltage on a specified impulse wave to its breakdown voltage at power frequency.

Typical impulse ratio values are

Sphere gap: 1

Rod gap: 1.6 to 3

Horn gap: 2 to 3

Surge arresters are usually connected between phase and ground: in the distribution system, across the terminals of large medium voltage rotating machines, and in HV, EHV and HVDC substations, to protect the apparatus insulation from lightning and switching surges.

The resistor blocks in the surge arrester offer a low resistance to the high voltage surge and divert it to ground, so the insulation of the protected installation is not subjected to the full surge voltage. Unlike a rod gap, the arrester does not create a short circuit; it retains the residual voltage across its terminals.

The surge arrester discharges the current impulse to earth and dissipates the energy as heat.

After discharging the impulse wave to earth, the resistor blocks in the surge arrester offer a very high resistance to the normal power frequency voltage, acting like an open circuit.

Some of the types of surge arresters being used today in the industry are

Gapped silicon-carbide surge arresters, called valve-type or conventional gapped arresters. These consist of silicon-carbide discs in series with spark gap units.

Zinc-oxide gapless arresters, called ZnO or metal oxide arresters. These are gapless and consist of zinc oxide discs in series. ZnO arresters have superior V-I characteristics and a higher energy absorption level, and are preferred for EHV and HVDC installations.

Fig. 1 - A ZnO surge arrester [1]

Gap-type SiC arresters are connected between phase and earth and consist of silicon-carbide resistor elements in series with gap elements. The resistor elements offer a non-linear resistance: at power frequency overvoltages the resistance offered is large, while for discharge currents it is low. The gap unit consists of air gaps of appropriate length. During normal voltages the arrester does not conduct. When a surge wave travelling along the line reaches the arrester, the gaps break down and, since the resistance offered is low, the wave is diverted to earth. After a few microseconds the normal frequency voltage reappears across the arrester; the arc current in the gap unit then falls, the voltage across the gap is no longer enough to sustain the arc, and the current flowing to earth is automatically interrupted, restoring normal conditions. Thus the high voltage surge is discharged to earth and the insulation of the connected equipment is protected.

Fig. 2 - Characteristics of a ZnO block [1]
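The strongly non-linear characteristic of a ZnO block is often approximated by a power law I = k·(V/Vref)^alpha, with alpha typically in the range of roughly 25-50 for metal-oxide material; the constants below are illustrative values, not taken from the paper:

```python
# Power-law sketch of a ZnO block's V-I characteristic.
def zno_current(v_pu: float, k: float = 1e-3, alpha: float = 30.0) -> float:
    """Current (A) through a ZnO block at voltage v_pu (per unit of a
    reference voltage). k and alpha are illustrative constants."""
    return k * v_pu ** alpha

# A small rise in voltage produces a huge rise in current: this is what
# lets the arrester conduct heavily on a surge yet stay near-open at
# normal operating voltage.
for v in (0.8, 1.0, 1.2):
    print(f"V = {v:.1f} pu -> I = {zno_current(v):.3e} A")
```

A 20% voltage rise multiplies the current by more than two orders of magnitude, which is why no series gap is needed.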

Station type - standard nominal discharge current 10,000 A (peak); voltage rating 3.3-245 kV rms; applied in large power stations and large substations.

Line type - standard nominal discharge current 5,000 A (peak); voltage rating 3.3-123 kV rms; applied in intermediate and medium substations.

Distribution type - standard nominal discharge current 2,500 or 1,500 A (peak); voltage rating up to 3.3 kV rms; applied in distribution systems and rural distribution.

Some terms and definitions related to surge arresters are given here to aid understanding of this paper.

A surge arrester is a device designed to protect electrical equipment from transient high voltages and to limit the duration and amplitude of the follow current.

Non-linear resistor: the part of the arrester which offers a low resistance to the flow of discharge currents, thus limiting the voltage across the arrester terminals, and a high resistance to power frequency voltage, thus limiting the magnitude of the follow current.

Rated voltage of the arrester is the maximum permissible rms voltage between the line and earth terminals of the arrester, as designated by the manufacturer.

It should be noted that while equipment is generally rated by its phase-to-phase voltage, for surge arresters the phase-to-ground voltage is the rated voltage.

Follow current is the current that flows from the connected power source through the arrester following the passage of the discharge current.

Nominal discharge current is the surge current that flows through the arrester after spark-over, expressed as a crest (peak) value for a specified wave. This term is used in classifying arresters as station, line or distribution type.

Discharge current is the current flowing through the surge arrester after the spark over.

Power frequency spark-over voltage is the rms value of the power frequency voltage, applied between the line and earth terminals of the arrester, which causes spark-over of the series gap.

Impulse spark-over voltage is the highest value of voltage attained during an impulse of given polarity and specified wave shape, applied between the line and earth terminals of the arrester, before the flow of discharge current.

Residual Voltage (discharge voltage) is the voltage that appears between the line terminals and earth during the passage of the discharge current.

Rated current of a surge arrester is the maximum impulse current at which the peak discharge residual voltage is determined.

Coefficient of earthing is the ratio of the highest rms voltage of a healthy phase to earth, to the phase-to-phase nominal voltage, times one hundred (expressed as a percentage), during an earth fault on one phase.

Thus, for an effectively earthed system the coefficient of earthing Ce <= 0.8, and the surge arrester rated voltage is therefore

Ua >= 0.8 × Um (rms)

The surge voltage Vs (kV, instantaneous) is taken as 2.5 times the critical flashover voltage (CFOV) of the line insulation. The discharge current Ia is then given by Ia = (2·Vs − Vr)/Z, where Vr is the residual voltage of the arrester and Z the surge impedance of the line.
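These relations can be put together numerically. Um, CFOV, Z and Vr below are assumed example values, and Ia = (2·Vs − Vr)/Z is the usual travelling-wave estimate of arrester discharge current (the factor 2 accounting for reflection at the arrester terminal):

```python
# Illustrative arrester sizing figures (all inputs are assumed examples).
Ce = 0.8          # coefficient of earthing, effectively earthed system
Um = 145e3        # highest system voltage, V rms (assumed)
Ua = Ce * Um      # minimum arrester rated voltage
print(f"Ua >= {Ua/1e3:.0f} kV rms")

CFOV = 650e3      # critical flashover voltage of line insulation, V (assumed)
Z = 400.0         # line surge impedance, ohms (typical assumed value)
Vr = 500e3        # arrester residual voltage, V (assumed)

Vs = 2.5 * CFOV                # incoming surge voltage
Ia = (2 * Vs - Vr) / Z         # travelling-wave discharge current estimate
print(f"Vs = {Vs/1e3:.0f} kV, Ia = {Ia/1e3:.1f} kA")
```

A result of several kA explains why station-class arresters carry a 10 kA nominal discharge current rating.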

The following is a list of standard tests performed on a surge arrester according to IEC:

1/50 impulse spark over test.

Wave front impulse sparkover test.

Peak discharge residual voltage at low current.

Peak discharge residual voltage at rated diverter current.

Impulse current withstand test.

Switching-impulse voltage test.

Discharge capability (durability) test.

Transmission line discharge test.

Low current long-duration test.

Power duty cycle test.

Pressure-relief test.

Acknowledgment

The author would like to sincerely thank and express her gratitude to Prof. Robert Spiewak for his guidance and support and the references he provided.

K.C. Agrawal, Industrial Power Engineering Applications Handbook, Newnes Power Engineering Series.

S. Rao, Switchgear Protection and Power Systems, Khanna Publishers.

IEEE Std. 141, IEEE Recommended Practice for Electric Power Distribution for Industrial Plants.

Gautami Bhatt (MEE'10) holds an M.E.E. in Power and Control Engineering from the University of Houston.




Suitable Material For Tubing And Piping Engineering Essay

The metallurgy of tubing is a very important factor when choosing tubing for a particular environment. Generally, tubing is made of carbon or low alloy steel, martensitic stainless steel, duplex stainless steel, or other corrosion resistant alloys such as nickel-base alloys.

METALLURGY FOR TUBING

Carbon steel is an alloy of carbon and iron containing up to 2% carbon and up to 1.65% manganese, with residual quantities of other elements. Steels with a total alloying element content of less than about 5%, but more than specified for carbon steel, are designated low alloy steels. Carbon steel is the most common alloy used in the oil industry because of its relatively low cost.

Though the corrosion resistance of these steels is limited, they have long been used satisfactorily in the oil industry. They are suitable for mildly corrosive environments, i.e. low partial pressures of CO2 and H2S.

A material selected for a particular environment may not remain suitable if the environmental conditions change. CO2 can cause severe weight-loss and localized corrosion; H2S can cause sulphide stress cracking and corrosion. Chlorides at high temperature can cause stress corrosion cracking and pitting of metals, while low pH in general increases the corrosion rate.

For example, the following materials are considered resistant to sulphide stress cracking:

Low and medium alloy carbon steels containing less than 1% nickel.

AISI 300 series (austenitic) stainless steels that are fully annealed and free of cold work.

The following materials have been found to have little or no resistance to sulphide stress cracking:

AISI Grades 420 and 13% Cr martensitic stainless steel.

All cold finished steels, including low and medium alloy steels and many varieties of stainless steel.

The limitations of carbon steel, 9Cr-1Mo, 13Cr and duplex stainless steel are encountered in various environments and downhole operations.

METALLURGY OPTIONS FOR TUBING

The various metallurgical options examined for tubing and other downhole equipment are Carbon & Low Alloy Steels, 9 Cr-1Mo steel, 13% Cr stainless steel, Duplex Stainless steel and nickel based alloys.

A brief summary of the suitability and limitations of these materials in the various environments encountered in oil and gas wells follows.

9Cr-1Mo steel

Like other nickel-free low alloy steels, this steel is immune to stress corrosion cracking in the presence of chlorides.

Its corrosion resistance in the presence of H2S is poor, however, so it is not commonly used for tubing.

13Cr Stainless steel

This steel can be used up to a CO2 partial pressure of 100 atm and a temperature of 150 °C, with chlorides up to 50 g/L.

This martensitic grade is known to be susceptible to sulphide stress cracking in sour environments, so it is generally used for sweet wells where minimal souring is expected.

Duplex Stainless Steel

Duplex SS has excellent corrosion resistance in CO2 environment.

Its limitations are susceptibility to stress corrosion cracking at high temperature and limited resistance to sulphide stress cracking when H2S is present in the produced fluid.

Nickel Based Alloys

Nickel-based alloys are required in extremely corrosive conditions involving very high partial pressures of H2S and CO2 along with the presence of free sulphur or oxygen.
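The suitability limits above can be condensed into a simple screening sketch. The numeric thresholds for 13Cr are from the text; the carbon steel limits are assumed placeholders for "mildly corrosive", and the whole function is a rough reading of the qualitative discussion, not a standards-based selection:

```python
def select_tubing(pp_co2_atm: float, sour: bool,
                  temp_c: float, chloride_g_l: float) -> str:
    """Rough tubing-metallurgy screen based on the limits in the text."""
    if sour:
        # 13Cr and duplex have limited SSC resistance in sour service.
        return "nickel-based alloy"
    if pp_co2_atm <= 1.0 and chloride_g_l <= 10.0:
        # "Mildly corrosive" service; 1 atm CO2 is an assumed cutoff.
        return "carbon / low alloy steel (with inhibition)"
    if pp_co2_atm <= 100.0 and temp_c <= 150.0 and chloride_g_l <= 50.0:
        return "13Cr martensitic stainless steel (e.g. L-80 13Cr)"
    return "duplex stainless steel or nickel-based alloy"

print(select_tubing(pp_co2_atm=20, sour=False, temp_c=120, chloride_g_l=30))
```

For the sweet, moderately chlorinated case shown, the screen lands on 13Cr, matching the recommendation made below.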

SELECTION OF TUBING METALLURGY

From the various metallurgical options analyzed, it can be concluded that low alloy carbon steel is not suitable for wells with a high corrosion risk, particularly offshore. If low alloy materials were to be used, an intensive corrosion inhibitor treatment program would be essential; even with the best of programs, the solution would be trial and error.

Although 9Cr-1Mo steels are resistant to CO2 attack, they should not be considered for this application since their use in chloride environments is limited to about 10 g/L (1%). With the high chloride concentrations coupled with the high wellbore temperature, this material is not suitable for downhole use in these wells.

Duplex stainless steel is susceptible to chloride stress cracking and should not be used with a CaCl2 packer fluid. Moreover, duplex material costs three to four times as much as 13Cr SS, which makes it economically unacceptable.

Hence, in spite of the additional up-front cost for tubing, it is recommended that, based on the caliper survey results, high corrosion risk wells of the field be re-completed with 13% Cr SS L-80 tubing material.

PROBLEMS OBSERVED

Metal loss corrosion in pipelines is caused by the presence of corrodents in the produced water. Internal corrosion can be promoted by mill scale, slag inclusions, improper heat treatment, improper welding, and fluid velocities that are too high or too low. Too high a fluid velocity causes erosion/corrosion; too low a velocity allows water and sludge build-up, which can cause pitting and bacterial infestation. At low velocity, water tends to segregate to the bottom of the pipeline, and once the pipeline is water-wetted, corrosion begins. When corrosion is not controlled, the time to first failure is normally three to twelve years, depending on wall thickness and operating conditions.

Corrosion of most materials is inevitable and can seldom be completely eliminated, but it can be controlled by carefully selecting materials and protection methods at the design stage. For example, as carbon steel is less resistant to corrosion, a corrosion allowance is added to the design thickness when it is expected to handle moderately corrosive fluids. Similarly, the external surface of a pipeline is protected from corrosive soils by protective coatings. Still, unexpected failures occur, resulting from one or more of the following reasons:

Poor choice of material

Defective fabrication

Improper design

Inadequate protection/maintenance

Defective material
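The "three to twelve years to first failure" remark above follows from a simple remaining-life estimate: the wall thickness margin divided by the corrosion rate. The numbers below are illustrative, not from the essay:

```python
def years_to_failure(wall_mm: float, min_wall_mm: float,
                     rate_mm_per_yr: float) -> float:
    """Years until the wall corrodes down to its minimum allowable
    thickness, assuming uniform corrosion at a constant rate."""
    return (wall_mm - min_wall_mm) / rate_mm_per_yr

# Example: 3 mm of corrosion allowance consumed at 0.5 mm/yr.
print(f"{years_to_failure(9.0, 6.0, 0.5):.0f} years")
```

This is also the logic behind sizing a corrosion allowance: allowance = expected rate × design life.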

CONCLUSION

Corrosion due to the presence of CO2 gas, along with unfavorable water chemistry, is the cause of the piping failures.

It is recommended that the tubing metallurgy be L-80 13Cr stainless steel with premium joints.

The downhole metallurgy shall be 13 Cr SS.

Suitable elastomeric materials shall be used for downhole and wellhead equipment.

These elastomeric materials include:

Nitrile: a rubber compound with butadiene acrylonitrile as its base material.

Viton: a fluoroelastomer manufactured by DuPont.

Fluorel: a fluoroelastomer manufactured by the 3M Company.

Ryton: a polyphenylene sulfide manufactured by Phillips Petroleum Company.




Secured Products To Function Engineering Essay

 


We need to look at assembly because it is a key activity in manufacturing: most products consist of several parts which must be brought together and secured for the product to function.


Assembly also gives a degree of freedom of movement and mobility to the various elements, and enables material differentiation. The various assembly methods can be classified into three elementary types: manual assembly, mechanical assembly and robotic assembly.


Assembly is very common, and perhaps regarded as necessary, but we should try to avoid it wherever we can; efforts have accordingly been made over the years to find methods of avoiding assembly.


Due to the high cost of labour, another approach was considered: automation. Its main goal is to integrate the various aspects of manufacturing operations so as to improve quality and minimize cycle time and labour cost.


It also improves productivity, reduces human involvement, raises the level of safety for personnel, and reduces raw material costs.


1.1 Flexibility is where the various individual manufacturing systems are incorporated into a single large-scale system, in which the production of parts is controlled with the aid of a computer. The advantage of this production system is high flexibility, with little effort and a short time required to manufacture a new product.


1.2 Manual assembly: in manual assembly the most significant factors are the senses available to the assembler in the form of vision, touch and sometimes hearing, and the assembler's ability to make sensible judgments very quickly.


For parts with tolerance defects, judgment becomes important during assembly. The possibilities that exist are: the part being inserted cannot reach its final location, or the part reaches its final location but does not give the required assembly.


Manual assembly is used in production situations where the work can be divided into small tasks. Its advantage is the specialisation of labour, each worker being given one set of tasks to do repeatedly; however, the high labour content results in high cost.


1.3 Automation is a system in which mechanical, electrical and computer-based systems are used to operate and control production. This technology includes automatic machine tools, automatic assembly machines, industrial robots, automatic storage and inspection systems, feedback control and computer process control.


Types of automation: fixed automation, flexible automation and programmable automation.


Reasons for automating: some of the important reasons are as follows.


Increased productivity: this means greater output per hour of labour input; higher production rates are achieved with automation than with manual operation.


High cost of labour: this is forcing business leaders to substitute machines for human labour.


Labour shortage.


Safety: by automating the operation and transferring the operator from active participation to a supervisory role, work is made safer.


High cost of raw materials: this creates a need for greater efficiency in using materials; the reduction of scrap is one of the benefits of automation.


Therefore, when large production quantities and high production rates are required, automation is used. Examples of such products are:


Electric components.


Electronic components.


Bottling plants.


Tablet manufacturing plants etc.


1.4 The advantages of automated systems are:


Reduce labour cost and manufacturing lead time.


Increase labour productivity.


Improve product quality.


Increase production rate.


Reduce material handling cost and time.


Increase manufacturing control.


Improve workers safety.


Overcome limitations of manual labour.

The disadvantages of automated systems are:

Too expensive.


Some tasks are too difficult to automate.


Problems with physical access to work location.


Short product life cycle.


Usually one of a kind product is produced.


Reduce the risk of product failure.


1.5 The objective of this assignment is to apply the knowledge gained in the automation module to the chosen artefact, the electric switch. The intention is to disassemble it, study it carefully, and design a system for assembling it in large quantities cost-effectively by means of automated and manual processes.


Marketing history


The single electric switch is the most common type of switch, found in every house, office and factory. It is essential to the power supply, being simple and easy to use and economical due to its low price.

There is more than one type of this on/off switch (single, double), and they are made of different materials: plastic, coated steel, chrome plated, etc. This makes the price vary.


The main components of the lighting switch are:


Base: usually made of plastic (PVC), though some manufacturers make it from chrome plated steel or another safe, long-life material.


The switch button: the mechanical part of the switch (it acts as the actuator). Its main function is to initiate the switch circuit operation (open and close), and it is made of the same material as the base. For safety reasons the material should be a very good insulator.


2a- Spring: a small spring made of good steel; it is part of the mechanical action and assists in switching the power from on to off and vice versa. Due to its elasticity it lasts a long time, and it prevents contact between solid parts.


Housing: this is the main part of the switch, containing all the electrical parts (terminals and their accessories). It is a moulded plastic product, which makes it a good insulator for all the power terminals.


3a- Terminal (1): consists of a block, an element, and a screw for tightening the electrical wire. Terminals are usually made of brass or copper, which are good conductors of electricity. (This is the common terminal.)


3b- Terminals (2) and (3): made from the same material as terminal (1). The contacts in all terminals are made of a low resistance metal that makes or breaks the circuit. Each terminal consists of a block (3b), an element (3b1) and a wire fastening screw (3b2).


Screw: fastens the housing assembly to the main base


Parts list:

1 - Base
2 - Button
2a - Spring
3 - Housing
3a - Terminal 1 (block)
3a1 - Element
3a2 - Screw
3b - Terminal 2 (block, 2 ea)
3b1 - Element (2 ea)
3b2 - Screw (2 ea)
4 - Fastening screws


Assembly sequence:

1. Load assembly base into work carrier
2. Assemble button subassembly
3. Assemble terminal 1 subassembly
4. Assemble terminal 2 subassembly
5. Assemble terminal 3 subassembly
6. Assemble housing subassembly to base
7. Check
8. Assemble screw
9. Remove complete switch


b) Product structure [chart: the switch breaks down into the base (1), the button subassembly (2, with spring 2a), the housing subassembly (3, with terminals 3a, 3a1, 3a2 and 3b, 3b1, 3b2, the 3b parts 2 each) and the fastening screw (4)]


c) Assembly structure based on components [chart listing components 1, 2, 2b, 3, 3a, 3a1, 3a2, 3b, 3b1, 3b2 and 4]


d) Assembly structure based on subassemblies [chart of assembly stations 1-9]


Product and Assembly Structure Charts


Component analysis (Lucas-method indices; A = demanded by function, B = present by design only):

Component (No.)       Function   Feeding/loading index   Fitting/gripping index   Assembly total
Base (1)                 A              1.3                    2.2                     1.5
Button (2)               A              1.2                    1                       1
Spring (2a)              A              2.1                    1                       1
Housing (3)              A              1.3                    2.2                     1
Terminal 1 (3a)          A              1.5                    1.5                     1
Element (3a1)            A              2.4                    4                       1
Screw (3a2)              B              2.1                    2.2                     1
Terminal 2 (3b)          A              1.5                    1.5                     1
Element (3b1)            A              2.4                    4                       1
Screw (3b2)              B              2.1                    2.2                     1.5
Screw (4)                B              2.1                    2.2                     1
Totals (as recorded)     11             20                     23                      13.7


Design Efficiency = ("A" components / total components) x 100% = (8 / 11) x 100% = 72%

Feeding/Handling Ratio = feeding index total / number of components = 20 / 11 = 1.8

Fitting Ratio = (gripping + fitting + fixing index total) / "A" components = __________


Design for assembly addresses product structure simplification, since the total number of parts in a product is a key indicator of product assembly quality.


A number of different DFA methods have been developed, and to be of any interest to designers they need to be:


Complete i.e. have objectivity and creativity.


Systematic- which helps to ensure that all relevant issues are considered i.e. the organization of objective and creative parts of DFA methods.


Measurable and user-friendly


3.1 Lucas Method: the method is based around an "assembly sequence flowchart". The Lucas/Hull group has developed a knowledge-based evaluation technique that follows a procedure in which the important aspects of assembly and component manufacture are considered and rated. The system is intended to be integrated into a CAD system, where it should be possible to obtain the information required for the analysis work with the minimum of effort and time.


- Functional analysis


- Handling analysis, which can be for manual or feeding assembly


-Fitting analysis


Following this method, the artefact was disassembled and a view was drawn showing all components (pieces); a build-up structure and an assembly structure were also produced (page 6).


3.2 Functional Analysis: this is carried out according to the rules of value analysis, and the degree of functional importance of each part is then categorized.


Each activity is put to the system in turn, and a description and name are given for the parts.


The assembly parts of the artefact were carefully investigated and categorised into either "A" parts (demanded by function) or "B" parts (present by design only). From this, the design efficiency was:


Design efficiency = (number of "A" components / total number of components) x 100 = (8 / 11) x 100 = 72%
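A minimal sketch of this efficiency and handling-ratio arithmetic, using the part counts and index totals recorded above:

```python
# Lucas-method DFA metrics for the switch; the part counts (8 "A" parts
# out of 11) and the feeding index total (20) are taken from the analysis
# table above.

def design_efficiency(a_parts: int, total_parts: int) -> float:
    """Percentage of parts that are demanded by function ("A" parts)."""
    return a_parts / total_parts * 100.0

def feeding_ratio(feeding_index_total: float, n_parts: int) -> float:
    """Feeding/handling index total per part, as in the worked example."""
    return feeding_index_total / n_parts

print(f"design efficiency = {design_efficiency(8, 11):.1f}%")    # the text rounds this to 72%
print(f"feeding/handling ratio = {feeding_ratio(20.0, 11):.1f}")
```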


As all components and subassemblies are manufactured in different places and are presented to the same point for assembly, our analysis considered three areas:


Handling difficulties


The size of the component


The weight of the component


The transfer mechanism of a flow line must not only move partially completed workparts or assemblies between stations; it must also orient and locate the parts in the correct position for processing at each station. The general transport methods can be classified as:


Continuous transfer


Synchronous transfer


Power and free transfer


The most suitable type of transport system for a given application depends on the following factors:


- The type of operation to be performed.

- The number of stations on the line.

- The workpiece size and weight.


There are various types of parts-feeding devices; the most common are:


- Hopper: components are loaded at the workstation, usually into the hopper in bulk, which means they are randomly orientated in the hopper.


- Parts feeder: This mechanism removes the components from the hopper one at a time for delivery.


- Orientator: where proper orientation is established


- Feed track: used to transfer the components from the hopper and parts feeder to the location of the assembly workhead.


The quality of gripping is the ability to hold a part in a way that allows the part to be inserted with the proviso that insertion is possible.


In manual assembly, parts handling does not present gripping problems, because people can perform insertion operations despite a poor relationship between the mating parts.


The best grip is a three-point grip whose lines of action are equally spaced and act through a common point. Another common possibility is a two-point grip, where positional errors perpendicular to the direction of grip are possible.


For flexible assembly it is advised to do the following for different tasks:


Use a universal gripper.


Use a turret of gripper.


Use gripper changing.


Use special multi-purpose gripper


Gripping is usually needed for parts which are difficult to assemble into position because of their size or shape; in this case it is needed for assembly of the power-wiring screws and the terminals in the housing.


In manual insertion, the basic insertion action differs from the automatic one. The part being inserted is deliberately misaligned so that contact is established between the mating parts; a combination of touch and sight then interacts with the movement to complete the operation.


Three observations illustrate this:


Even in a blind situation, once contact has been made the insertion operation is easy. Attempts by an operator to achieve a relatively open-tolerance insertion without the mating parts touching are usually unsuccessful.


People are not good at close tolerances.


In automated assembly no touch is needed if there is good alignment.


There are common design rules for assembly processes:


Insert vertically from above.

Use chamfers and tapers to assist in alignment.

Choose tolerances as open as possible.

Do not have more than one insertion site.

Design so that parts can be released as soon as insertion has started.


From the previous analysis tables, two steps can be taken to redesign the "switch" artefact:


- The terminal should come as a complete unit, meaning the element is welded to the block and the screw is already in position; this will minimise the assembly steps and save time and cost.


- The housing can be assembled to the base by means of snap fitting instead of the fastening screws.


The outcome of this redesign will result in:


a- Reduced parts count

b- A visible assembly process ensured at minimum cost

c- Reliable automatic assembly achieved

d- Standardisation of components


A flexible manufacturing system (FMS) provides the efficiency of mass production for batch production, and its main advantages are:


- Increased productivity


- Shorter preparation time for new products


- Reduction of inventory parts


- Saving of labour cost


- Improved product quality


- Attracting skilled people


- Improved operator safety


4.1 Activity Flow Chart


Activity flow chart: six numbered stations served by a stack magazine, a linear vibrator, vibratory bowl feeders, a rotary bowl feeder, a pallet magazine with full pallets, a refuse tray, a Poka-Yoke check, and a robot handling the sequence.


Feed the housing by means of a stack magazine; this magazine must be set up for each "switch" variant. (The housing should be held in the work carrier and secured.)


Feed subassembly terminal 1 with the aid of a pallet magazine.


Feed subassembly terminals 2 & 3 with the aid of a pallet magazine.


Feed the base with the aid of a linear vibrator.


Feed the button into the base with the aid of a vibratory bowl feeder.


Feed the spring by means of a vibratory bowl feeder.


Place the subassembly housing on the base by means of a snap fit.


Remove acceptable completed assemblies with the aid of an index transfer system provided with pallets.


The sequences are handled by a SCARA robot with a gripper-change system, which is used to handle the terminals.


There are three workstations in this assembly: the housing assembly station, the base assembly station, and the completed-assembly station.


The feeding devices used are:


Pallet magazine.


Stack magazine.


Linear vibrator.


Vibratory bowl feeder.


Poka-Yoke: used to test whether the terminals are fitted in position or not.


The advantages of the proposal of re-designing the artefact could be summarized in the following:


Lower manpower cost.


Less automation (fewer feeders) used.


Less time.


More productivity.


More safety


The cost after the redesign proposal should in general be reduced. Regarding the implementation stages, there is no transfer from manual to semi-automation; the main changes are in the terminals, which are fed pre-assembled, reducing time, automated equipment and tooling.


Also, the fastening screws are replaced by snap fitting, which will increase the number of "A" parts and therefore the overall efficiency.



This is Preview only. If you need the solution of this assignment, please send us email with the complete assignment title: ProfessorKamranA@gmail.com

Scott Fluid Circuit System Engineering Essay

 


To measure the major head losses of the fluid using the Scott Fluid Circuit System, and to analyze the relation between the pressure, the velocity and the friction arising from the flow.


In this lab, Reynolds theory was used to determine the head loss within the flow of the fluid using the Scott fluid circuit. The experiment was conducted using two different pipe sizes, ¾ in and 1 in.


For the ¾ in pipe, the Moody chart was consulted for the friction factor using the relative roughness and the Reynolds number. The roughness curves tend to flatten out near a Reynolds number of 10^5, beyond which curve analysis is required to determine the appropriate friction factor. The experiment yielded the smallest percentage error at a venturi meter height of 5.25 inches; the remaining errors reflect uncertainty in the data analysis or other factors affecting data collection.


According to Reynolds, there are two types of pipe flows:


Laminar Flow


Turbulent Flow


Laminar flows are low in velocity and the fluid particles move in straight lines, whereas turbulent flows are high in velocity and the motion of the fluid particles is irregular.
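As a rough sketch, this laminar/turbulent distinction can be drawn from the Reynolds number; the 2300 and 4000 thresholds below are the conventional pipe-flow values, assumed here rather than taken from this experiment:

```python
def classify_pipe_flow(reynolds: float) -> str:
    """Classify pipe flow by Reynolds number using the conventional
    thresholds (Re < 2300 laminar, Re > 4000 turbulent)."""
    if reynolds < 2300:
        return "laminar"
    if reynolds <= 4000:
        return "transitional"
    return "turbulent"

print(classify_pipe_flow(1500))      # laminar
print(classify_pipe_flow(42428.63))  # turbulent
```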


As fluids are viscous, they lose energy to friction when flowing. The pressure loss due to friction is termed the head loss.


By continuity, the flow rate is constant: Q1 = Q2, so for a pipe of constant cross-section V1 = V2. The change in the sum of pressure and gravity head can be equated to the head loss. The head loss due to friction in a circular pipe, for laminar or turbulent flow, is

hf = f * (L / D) * (V^2 / (2 * g))

where

f = friction factor

L = length of pipe

D = diameter of pipe

V = mean flow velocity

g = gravitational acceleration

ν = kinematic viscosity

The Reynolds number is Re = V * D / ν, and e is the roughness coefficient related to the roughness of the pipe walls.


Referring to the Moody chart, the relative roughness e and the Reynolds number together determine the friction factor (f).
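Instead of reading the Moody chart by eye, the turbulent friction factor can be computed from the Colebrook-White correlation, which the Moody chart plots; a sketch, with the relative roughness value chosen for illustration rather than measured in this lab:

```python
import math

def colebrook_friction_factor(re: float, rel_roughness: float,
                              iterations: int = 50) -> float:
    """Iteratively solve the Colebrook-White equation
    1/sqrt(f) = -2*log10(e/(3.7*D) + 2.51/(Re*sqrt(f)))
    for the Darcy friction factor f (turbulent pipe flow)."""
    f = 0.02  # initial guess typical of turbulent pipe flow
    for _ in range(iterations):
        f = (-2.0 * math.log10(rel_roughness / 3.7
                               + 2.51 / (re * math.sqrt(f)))) ** -2
    return f

# Roughly the conditions of the 3/4 in run: Re ~ 42400, near-smooth pipe
# (relative roughness 1e-4 is an assumed illustrative value).
print(colebrook_friction_factor(42428.63, 1e-4))
```

The result lands close to the 0.0217 read from the chart in the text, which is a useful cross-check on a chart reading.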


Manometer


Rotometer


Pump


Venturi Tube


Make sure all equipment is clean.


The task is divided among separate groups.


Make sure that the unused system valves are closed; i.e. if the ¾ inch pipe is used, make sure that the valves of the other pipes are closed. This way there will be no leakage in the system.


Record the pressure levels and make sure that there is no back-pressure build-up in the pump and that the flow is continuous.


Sample calculation for the major losses in the ¾ inch pipe:


First convert ΔP from inches to ft:


Flow Rate = Q = 0.018364 ft3 / sec


Converting the flow rate from (ft3 / sec) to (gallons / minute):

448.8311688 (gal/min per ft3/s) * 0.018364 ft3/s = 8.242475 gallons / minute


Velocity = Flow Rate / Area = Q / A = 0.018364 ft3/sec / 0.003360986 ft2 = 5.463979 ft / sec


Reynolds Number = (V * D) / ν = (5.463979 ft/sec * 1.025 in * 1 ft / 12 in) / (0.000011 ft2/sec) = 42428.63


Friction Factor = Recorded using Moody Chart = 0.0217


Head Loss = f * (L / D) * (V^2 / (2 * g))

Head Loss = 0.0217 * (5 ft / (1.025 in * 1 ft / 12 in)) * ((5.463979 ft/sec)^2 / (2 * 32.2 ft/sec^2))


Head Loss = 0.58887 ft


Indicated Head Loss = 0.541667 ft


% Error = [Experimental value – Actual Value] / [Actual Value] * 100 %


% Error = [0.58887 – 0.541667] / [0.541667] *100 % = 8.714433 %
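The sample calculation above can be reproduced end to end; the figures (Q = 0.018364 ft³/s, area 0.003360986 ft², diameter 1.025 in, length 5 ft, ν = 1.1e-5 ft²/s, and f = 0.0217 from the Moody chart) are the ones used in the text:

```python
# Major head-loss sample calculation for the 3/4 in pipe, reproducing the
# worked numbers from the text above.
Q = 0.018364        # flow rate, ft^3/s
A = 0.003360986     # pipe cross-sectional area, ft^2
D = 1.025 / 12      # inner diameter, ft (1.025 in)
L = 5.0             # pipe length, ft
nu = 0.000011       # kinematic viscosity, ft^2/s
g = 32.2            # gravitational acceleration, ft/s^2
f = 0.0217          # friction factor read from the Moody chart

V = Q / A                          # mean velocity, ft/s
Re = V * D / nu                    # Reynolds number
hf = f * (L / D) * V**2 / (2 * g)  # Darcy-Weisbach head loss, ft

indicated = 0.541667               # head loss indicated by the manometer, ft
error = abs(hf - indicated) / indicated * 100.0

print(f"V = {V:.3f} ft/s, Re = {Re:.0f}, hf = {hf:.5f} ft, error = {error:.2f}%")
```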


The data collected and the calculated results corroborate the equation hf = f * (L/D) * (V^2 / (2g)), showing that the pipe head loss equals the change in the sum of pressure and gravity head. Hence the friction factor is a function of the Reynolds number and the roughness, validating the analysis.




Sun Energy Converted To Electricity Energy Source Engineering Essay

The sun's energy can be converted to electricity to give us a constant source of energy that can be tapped without fear of power cuts or excessively high electricity bills, and to ensure a clean, pollution-free environment. This can be done with the help of photovoltaic cells in solar power plants set up especially for this purpose. Solar power plants have solar panels made up of numerous solar cells. A solar cell is a small disk of a semiconductor material such as silicon, attached to a circuit by electrical wires. As sunlight hits the semiconductor, the sun's energy in the form of light is converted into electricity that flows through the circuit. Solar cells can only produce power in the presence of strong sunlight; when there is no light they stop producing electricity.

Solar batteries are fast becoming popular as an alternative to conventional car fuel, helping to conserve the environment and reduce carbon footprints. They minimize the chances of being caught out with a dead battery and high bills for gasoline and other fuels. Green solar cars are being popularized by famous business houses like Toyota, Panasonic and Venturi to promote awareness and create a favorable brand image. Federal and state governments also encourage the use of solar hybrid, eco-friendly cars through tax benefits.

The theory of solar energy conversion is a modern science that came into existence in the 1970s. In order to cater to our ever-growing energy needs, various studies have been undertaken in recent times to explore means of developing efficient solar energy conversion techniques. The amount of energy that reaches Earth from the Sun is astonishing: in one second the Sun provides around 10^17 joules of energy to Earth, and, equally surprising, the Sun provides as much energy to the Earth in one hour as humans consume annually. The rate at which the Earth receives solar energy from the Sun is 1.2 × 10^5 terawatts, whereas the rate of man-made energy production on Earth is merely 13 TW. Although the quantum of solar energy received by Earth is unprecedented, it is not effectively used to cater to the energy needs of civilization. Non-renewable sources of energy like fossil fuels are still the major source used to satisfy energy requirements worldwide. Through combustion, fossil fuels are turned into useful energy, but they produce various greenhouse gases and other pollutants that are hazardous to the environment. The facts about solar energy cited below make it more appealing than any other energy source:

wide availability

versatility

benign effect on the environment and climate

The untapped potential of the solar energy could be harnessed by conversion of solar energy into electricity. Today various studies on energy conversion based on nanomaterials focus on such conversion.

Listed below are the three methods used for the conversion of solar energy into electricity:

1. Solar Energy Cell

2. Solar Energy Collectors

3. Solar Energy Concentrators

Solar cells, generally known as photovoltaic cells, are used to convert sunlight into electricity directly; the phenomenon is known as the photovoltaic effect. Photovoltaic cells are made up of thin layers of semiconducting material placed one above the other. Silicon is the most commonly used semiconducting material in photovoltaic cells. Nowadays, solar panels have proved their utility both in residential solar power generation and in utility-scale power plants. When the surface of the cells faces the sun, the electrons absorb the solar energy in two different semiconductors, which in turn creates the electric current.

"Modules" is the term used to refer to an array of photovoltaic cells grouped together to create an energy flow; a module can hold around 40 cells. To generate electricity for a building, at least 10 such modules need to be mounted together, and the number of modules must be increased for large installations such as power plants. The 80 MW Sarnia Photovoltaic Power Plant in Canada is the world's largest photovoltaic plant.

This process is used to heat buildings in winter. First, solar panels are installed on the roof of the building. Along with heating the building, these panels also heat the water pipes carried throughout it, thereby keeping the water inside the building heated. The solar energy is therefore used directly to warm the water.

The two main components of a solar water heating system are the solar collector and a storage tank. The solar collector is a flat-plate, thin rectangular box facing the sun, installed on the roof of the building. The solar energy heats the absorber plate, which in turn heats the water flowing through tubes inside the collector.

Solar power can also be converted into electricity indirectly through concentrated solar power (CSP). Under this method, mirror configurations are used to convert solar energy into electricity. Various concentrating techniques are available, including the following:

Parabolic trough

Concentrating linear Fresnel reflector

Dish Stirling

Solar power tower

The parabolic trough technique is the most commonly used technique to collect solar energy and use it to heat water. In this technique, sunlight is focused onto a receiver pipe using parabolically curved mirrors; the receiver pipe runs through the focal line of the curved surface. The working fluid in the pipe gets heated, and a conventional generator is used to produce electricity. The significance of this system lies in the fact that a large area of sunlight is focused into a small beam using lenses and mirrors. The troughs in the collector are aligned on a north-south axis to track the movement of the sun from east to west throughout the day.

The 354 MW SEGS concentrated solar power plant in California is the largest solar power plant in the world. Other CSP plants include the Solnova Solar Power Station (150 MW) and the Andasol Solar Power Station (100 MW), both in Spain.

After the conversion of solar energy into electricity, it becomes imperative to have proper means to store it, so as to have a continuous supply of electricity even when sunlight is not available. Broadly speaking, solar electricity can be stored either through integration with the utility company's grid or by providing solar batteries to bank the electricity.

This system of storage is used when electricity is stored on a very large scale. The extra electricity generated in peak hours is fed into the grid and can be withdrawn whenever required.

The need for storing the additional energy produced by the solar panels for later use necessitates the use of solar batteries. The solar battery stores the excess charge and helps to power a solar driven motor on days when direct sunlight may not be available or even during the night time. Commonly used types of batteries are the Lithium polymer, Lithium ion, Nickel-Cadmium, Nickel-Metal Hydride and the lead-acid batteries. The most efficient of these, however, are the Lithium polymer batteries. They store their electrolyte in an organic solvent state and are non-inflammable and safe to use.

When long power outages from the grid are predicted, a battery bank is used to store the electricity produced from solar energy. This mode of storage is as easy as hooking the batteries up to the transmission grid; the excess solar power can then be stored in the batteries. This is one of the most efficient ways to store power, because rechargeable batteries can store the excess electricity for a long duration. When solar electricity is produced, it is sent to the batteries, where it is converted into chemical energy and stored in liquid form. When retrieving electricity from the battery, an electric charge triggers a chemical process that converts the energy back into electrons. Various types of batteries are available to store solar-electric energy and are used in different application areas:

Under Vanadium Redox Flow battery electrical energy is stored in two tanks of electrolytes or fluids that conduct electricity. Such batteries could be used as storage backup for a time span of 12 hours. These batteries could also be used in integrating solar power in a residential neighborhood or at several large industrial sites. At the time of energy requirement the liquid is pumped from one tank to another through a steady process after which the chemical energy from the electrolyte is transformed to electrical energy. During peak periods when there is maximum sunlight this process gets reversed and the excess energy gets stored in the battery. The size of the tank and its capacity to hold the electrolyte influences the quantum of energy that could be stored in the battery.

Under the sodium-beta alumina membrane battery sulfur and sodium are particularly used which serves the purpose of charging and discharging the electricity in/from the battery. The battery’s core is made up of aluminum oxide consisting of sodium ions. The battery is built in tubular design and has the capacity to store lots of energy in a small space. This battery is best suited for powering electric vehicles because of its high energy density, rapid rate of charge and discharge and short, potent bursts of energy.

However, as the battery operates at high temperatures it has been suggested to modify the shape of the battery in order to fix the safety issues and also to improve the efficiency.

Generally, lithium-ion (Li-ion) batteries are used in household gadgets and electric vehicles. These batteries are made up of different elements like lithium, manganese and cobalt. They are best suited for transportation applications because of their high energy and power capacity. The battery works as positively charged lithium ions migrate through a liquid electrolyte while electrons flow through an external circuit, both moving back and forth from one side to the other; this movement creates and stores energy.

Lead-carbon batteries are usually used in back-up generators and in automobiles. Various studies have shown that the lifespan of traditional lead-acid batteries can be improved by adding carbon; such lead-carbon batteries also have highly concentrated power, which makes them a suitable storage medium for solar power. In a normal lead-acid battery, sulfuric acid reacts with the lead anode and cathode to create lead sulfate during discharge, and the process reverses during charge. With time the battery's core gets filled with lead sulfate due to crystallization. This crystallization can be prevented by adding carbon to the battery, thereby enhancing its life.

The choice of using a particular battery from the above explained few depends upon the nature of application and the budget of the project.

A battery bank is a collection of connected 2-, 6-, or 12-volt batteries that supplies power to the plant in case of outages or low electricity production. To produce the required voltage, the batteries are wired together in series to form 12-, 24-, or 48-volt strings. These strings are then connected in parallel to make up the entire battery bank. The battery bank supplies DC power to an inverter, which produces AC power that can be used to run appliances. Factors like the inverter's input, the type of battery selected and the amount of energy storage required determine the size of the battery bank.
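The series/parallel arithmetic described above can be sketched as follows; the 48 V system and the 12 V, 100 Ah battery figures are illustrative examples, not values from the text:

```python
def battery_bank(system_voltage: int, battery_voltage: int,
                 battery_ah: float, strings: int) -> dict:
    """Size a battery bank: batteries wired in series form a string at the
    system voltage; strings wired in parallel add amp-hour capacity."""
    if system_voltage % battery_voltage:
        raise ValueError("system voltage must be a multiple of battery voltage")
    per_string = system_voltage // battery_voltage
    return {
        "batteries_per_string": per_string,
        "total_batteries": per_string * strings,
        "capacity_ah": battery_ah * strings,
        "energy_wh": system_voltage * battery_ah * strings,
    }

# Illustrative: a 48 V bank built from 12 V, 100 Ah batteries, three strings
# in parallel (four batteries per string, twelve in total).
bank = battery_bank(48, 12, 100.0, 3)
print(bank)
```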

When installing a new battery, it is suggested to check its life cycle and the number of deep discharges it will be able to provide. The thickness of the lead plates also needs to be checked, as the life of the battery depends on it.

The normal life of batteries is around 10-15 years irrespective of the amount of usage, as the acid in the battery wears down its internal components. To keep a battery working over its entire life, the following practices must be undertaken:

1. Repeated deep discharging of batteries must be avoided. The life of a battery is negatively correlated with the number of times it is discharged; the more a battery is discharged, the shorter its lifetime. The other way to address this is to increase the size of the battery bank: to support deep discharge every day, the battery bank must be made larger.

2. Batteries must be stored at controlled temperatures. Battery life ratings apply only for temperatures between 70º and 75º. Keeping batteries in warmer temperatures reduces their life significantly. An effective way to heat a battery storage unit is with passive solar power, but the unit must also be well insulated. Maintaining the temperature of the battery storage unit below 70º-75º will not extend battery life to any significant degree but will tend to reduce capacity. Discharged batteries may freeze and burst, so maintain an adequate charge on the batteries in cold weather.

3. Maintain the same charge in all the batteries. Although the entire series of batteries may have an overall charge of 24 volts, some cells may have more or less voltage than neighboring batteries.

4. Inspect the batteries at regular intervals to keep track of leakage (build-up on the outside of the battery), appropriate fluid levels (for flooded batteries), and equal voltage.

The solar battery should supply a constant voltage of approximately one hundred volts to be able to power the solar car motor. The battery pack comprises several modules wired together. A higher voltage gives better efficiency, even though it requires a more complex array. The electronics controlling the power to the car include the peak-power trackers, the motor controller and the data transmission system.




Seismic Analysis Of Steel Concrete Engineering Essay

 


Nowadays earthquakes occur very frequently, and maximum loss of life and property occurs due to sudden failure of structures; therefore special attention is required to evaluate and improve the seismic performance of multistoried buildings. Hence, in this paper the seismic analysis of a G+4-story office building is carried out using a composite structure, in which composite beams (RCC slab resting over steel beams) and composite columns (encased composite columns) are used. A 3-D static analysis of the model is carried out with the help of advanced analysis software (SAP) according to codal provisions, considering different load combinations. The results obtained from this type of structure are compared with the results for the same RCC structure to describe the earthquake-resistant behavior and performance of the structure.


This type of construction has many advantages, like high strength, high ductility and stiffness, ease of erection of high-rise buildings, fire resistance and corrosion resistance, and it helps to achieve modern trends in architectural requirements.


KEY WORDS


Composite structure, problem, composite beams, encased composite column, earthquake analysis, codal provision, different load combination, comparison with RCC building.


INTRODUCTION


In India, the occurrence of earthquakes has increased during the last few years, and it has been observed that maximum loss of life and property occurs due to sudden failure of structures. In composite construction, economy and proper utilization of material are achieved. Numerous structures have been constructed using composite construction in most advanced countries like Britain, Japan and America, but this technology has been largely ignored in India despite its obvious benefits (1).


In a composite structure, advantage is taken of the bonding between steel and concrete so that they act as a single unit under loading. Steel is provided where tension is predominant and concrete where compression is predominant. In conventional composite construction, concrete rests over the steel beam (2); under load these two components act independently and a relative slip occurs at the interface of the concrete slab and the steel beam. This slip can be eliminated by providing a deliberate and appropriate connection between them, so that the steel beam and slab act as a composite beam behaving like a Tee beam. In steel-concrete composite columns, both steel and concrete resist external loads; such columns help to limit the sway of the building frame and occupy less floor area than reinforced concrete columns. Studies of the economy of composite construction show that it is economical, lightweight, fire and corrosion resistant, and, due to fast-track construction, the building can be occupied earlier than a reinforced concrete structure (3).


In this paper an office building is considered, and its seismic analysis is carried out using composite beams (RCC slab resting over steel beams) and encased composite columns (concrete around hot-rolled steel I-sections); the results obtained are compared with those of the same RCC structure.


EXAMPLE OF BUILDING


The building considered is an office building having G+4 stories. The height of each storey is 3.5 m. The building has plan dimensions of 24 m x 24 m, on a land area of about 1200 sq m, and is symmetric in both orthogonal directions, as shown in figure 1. Separate provisions are made for car parking, security room, pump house and other utilities; however, they are excluded from the scope of work. The building provides for 180 employees and is considered to be located in seismic zone III, built on hard soil. In the composite structure, the size of the encased composite column is 450 mm x 450 mm (Indian standard column section SC 250 + 100 mm concrete cover), the primary composite beam is ISMB 450 @ 72.4 kg/m and the secondary composite beam is ISMB 400 @ 61.6 kg/m. Channel shear connectors ISMC 75 @ 7.14 kg/m are used. The concrete slab resting over the steel beams has a thickness of about 125 mm. The unit weights of concrete and masonry are taken as 25 kN/m3 and 20 kN/m3 respectively. Live load intensity is taken as 5 kN/m2 at each floor level and 2 kN/m2 on the roof, and the weight of the floor finish is considered as 1.875 kN/m2 (4). In the RCC structure, the column size is decided by taking the equivalent area of the encased composite column, that is 400 mm x 700 mm; the primary beams are 300 mm x 600 mm and the secondary beams 300 mm x 450 mm, with a slab thickness of about 125 mm and the same unit weights, live loads and floor finish as above. In the analysis, a special RC moment-resisting frame (SMRF) is considered.


MODELLING OF BUILDING


The building is modeled using the software SAP 2000. Beams and columns are modeled as two-noded beam elements with six DOF at each node; the slab is modeled as four-noded shell elements with six DOF at each node. Walls are modeled by the equivalent strut approach (5): the diagonal length and thickness of the strut are taken equal to those of the brick wall, and only the strut width is derived. The strut is assumed to be pinned to the confining frame at both ends. The material is treated as isotropic.
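The strut geometry described above can be sketched in a few lines. The paper derives the strut width but does not give the formula it uses, so the fraction below is an assumption (Holmes' common rule of thumb takes the width as one third of the diagonal); the 6 m bay is also only an illustrative value.

```python
import math

def strut_width(bay_length_m, storey_height_m, fraction=1/3):
    """Width of the equivalent diagonal strut for an infill wall.

    The strut length equals the wall diagonal, as in the paper; the
    width is taken as a fixed fraction of that diagonal. fraction=1/3
    follows Holmes' rule of thumb and is an assumption here, not the
    paper's own derivation.
    """
    d = math.hypot(bay_length_m, storey_height_m)  # wall diagonal length
    return fraction * d

# Hypothetical 6 m bay with the paper's 3.5 m storey height
w = strut_width(6.0, 3.5)
```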


2.1 Shell Element


The slab is modeled as a 125 mm thick shell element with a 1 m x 1 m mesh. The shell material is M25 grade cement concrete in both the composite and the RCC structure.


2.2 Beams


In the composite structure the beams are steel I-sections taken from the IS code and steel tables. The length of each beam is divided into 1 m segments and connected to the concrete slab so as to obtain composite action. In the RCC structure the length of each concrete beam is likewise divided into 1 m segments and connected to the slab so as to obtain the behavior of a Tee beam.


2.3 Columns


In the composite structure the column is modeled by supplying the section properties of both the steel and the concrete to the software. In the RCC structure the column is modeled by supplying its sectional properties in the same way.


ANALYSIS OF BUILDING


Equivalent static analysis is performed on the 3D model described above. The lateral loads are calculated and distributed along the height of the building as per the empirical equations given in the code (IS 1893:2002). The model is then analyzed in SAP 2000, and the bending moment and shear force of each beam and column are calculated at each floor and tabulated below.
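The code's empirical equations can be sketched as follows: the base shear is Vb = Ah * W with Ah = (Z/2)(I/R)(Sa/g), and it is distributed over the storeys as Qi = Vb * Wi*hi^2 / sum(Wj*hj^2) per IS 1893 (Part 1):2002. The storey weights and the Sa/g value below are illustrative assumptions, not the paper's numbers; only Z = 0.16 (zone III) and R = 5 (SMRF) follow the code values implied by the text.

```python
def base_shear(W, Z=0.16, I=1.0, R=5.0, Sa_g=2.5):
    """Design base shear Vb = Ah * W, with Ah = (Z/2)*(I/R)*(Sa/g).
    Zone III gives Z = 0.16 and an SMRF gives R = 5; Sa/g = 2.5 assumes
    a short-period structure on the code spectrum (illustrative)."""
    Ah = (Z / 2.0) * (I / R) * Sa_g
    return Ah * W

def storey_forces(Vb, weights, heights):
    """Distribute Vb over the storeys: Qi = Vb * Wi*hi^2 / sum(Wj*hj^2)."""
    denom = sum(w * h * h for w, h in zip(weights, heights))
    return [Vb * w * h * h / denom for w, h in zip(weights, heights)]

# Five storeys at the paper's 3.5 m spacing, equal (hypothetical) weights
heights = [3.5 * k for k in range(1, 6)]
weights = [5000.0] * 5                       # kN, assumed
Q = storey_forces(base_shear(sum(weights)), weights, heights)
```

The quadratic weighting pushes most of the lateral force to the upper storeys, which is why the roof-level member forces in the tables below differ so much from the plinth-level ones.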


RESULTS AND DISCUSSION


4.1 Results of Composite Structure:


Floor Level  | Max. Shear Force (kN) | Max. +ve BM (kN-m) | Max. -ve BM (kN-m)
Plinth Level | 73.32                 | 19.64              | 168.6908
1            | 177.925               | 134.31             | 306.1174
2            | 175.075               | 132.34             | 299.477
3            | 165.571               | 132.34             | 274.038
4            | 153.64                | 132.39             | 236.546
Roof Level   | 65.59                 | 82.15              | 125.52

Table 1: Bending Moment and Shear Force of Beams (Composite Structure)


4.2 Results of RCC Structure:


Floor Level  | Max. Shear Force (kN) | Max. +ve BM (kN-m) | Max. -ve BM (kN-m)
Plinth Level | 115.00                | 62.45              | 230.42
1            | 244.772               | 177.96             | 449.82
2            | 236.744               | 183.89             | 418.69
3            | 223.675               | 175.28             | 380.04
4            | 207.023               | 174.63             | 324.58
Roof Level   | 119.83                | 115.1004           | 181.00

Table 2: Bending Moment and Shear Force of Beams (RCC Structure)


4.3 Results of Composite Structure:


Column No. | Max. Axial Force (kN) | Max. Shear Force (kN) | Max. Bending Moment (kN-m)
Column-1   | 1462.307              | 83.868                | 251.1801
Column-2   | 2865.903              | 101.64                | 271.4602
Column-3   | 2828.667              | 100.091               | 269.33
Column-4   | 2865.903              | 101.64                | 271.46
Column-5   | 1462.307              | 83.87                 | 251.18

Table 3: Axial Force, Shear Force and Bending Moment of Columns (Composite Structure)


4.4 Results of RCC Structure:


Column No. | Max. Axial Force (kN) | Max. Shear Force (kN) | Max. Bending Moment (kN-m)
Column-1   | 2453.516              | 148.942               | 495.89
Column-2   | 3526.32               | 161.64                | 510.50
Column-3   | 3538.64               | 160.995               | 509.61
Column-4   | 3519.463              | 161.83                | 511.142
Column-5   | 2455.27               | 149.047               | 496.432

Table 4: Axial Force, Shear Force and Bending Moment of Columns (RCC Structure)


From the above results it is found that the bending moments and shear forces in the composite structure are lower than those in the RCC structure. Hence the cross-sectional areas and the amount of steel in the structural elements are reduced in the composite structure, freeing more floor space for utilization.
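The comparison can be made concrete by computing the percentage reductions directly from the column axial forces in Tables 3 and 4:

```python
# Maximum axial forces (kN) for columns 1-5, taken from Tables 3 and 4
composite = [1462.307, 2865.903, 2828.667, 2865.903, 1462.307]
rcc       = [2453.516, 3526.320, 3538.640, 3519.463, 2455.270]

# Percentage reduction of the composite column relative to the RCC column
reductions = [100.0 * (r - c) / r for c, r in zip(composite, rcc)]
for n, pct in enumerate(reductions, start=1):
    print(f"Column-{n}: axial force reduced by {pct:.1f}%")
```

The corner columns (1 and 5) show the largest reduction (around 40%) and the interior columns the smallest (just under 19%), so the reduction varies noticeably with column position.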


CONCLUSIONS


In this paper a three-dimensional model is analyzed using the SAP 2000 software in terms of the structural characteristics of encased composite columns and composite beams. It is concluded that:


The dead weight of the composite structure is found to be 15% to 20% less than that of the RCC structure, and hence the seismic forces are reduced by 15% to 20%: as the weight of the structure reduces, it attracts comparatively less earthquake force than the RCC structure.


The axial force in composite columns is found to be 20% to 30% less than RCC columns in linear static analysis.


The shear force in composite column is reduced by 28% to 44% and 24% to 40% in transverse and longitudinal directions respectively than the RCC structure in linear static analysis.


The bending moment in composite column in linear static analysis reduces by 22% to 45%.


In composite beams the shear force is reduced by 8% to 28% in linear static analysis.


It also provides fire and corrosion resistance, and sufficient strength, ductility and stiffness.


Hence composite construction is one of the best options for multistorey buildings as well as for earthquake-resistant structures.




Suitable For Harnessing Solar Energy Engineering Essay

 


There are several projects, such as photovoltaic cells in solar panels, solar power concentrators and parabolic dishes, devised to harness solar energy in India. These prove very fruitful in the south-eastern parts of India (Tamil Nadu, Karnataka), in the northern plains (Uttar Pradesh, Bihar) and in western India (Rajasthan and parts of Gujarat). In short, places with an annual average temperature of more than 25°C are suitable for converting this energy into usable power.


In this project we have tried to put forward a new way of converting solar energy directly into mechanical energy without any use of solar cells or mirrors. The idea is simple, as there is no dedicated machinery or setup involved and practically no running cost.


The basic idea lies in the structure and the way the solar energy is used. In this project, the aim is to convert solar energy into the kinetic energy of wind and then use this wind for purposes such as rotating a turbine or any other mechanical work. The basic principle is that heated air is lighter than cold air: air heated near the earth's surface rises into the atmosphere and is replaced by cold air from the surroundings. The rising heated air acquires kinetic energy as it moves, which can be used to rotate turbine blades when it strikes them under pressure.


The structure consists of an aluminium dish, 20 meters across between diametrically opposite points, with a hole at its centre, placed on supports such that its outer boundary lies 3 meters above ground level. Eight iron rods inserted in the ground serve as supports. A frustum-shaped pipe with a nozzle at the upper end and a lower diameter of 1 meter directs the hot air up to the turbine blades, and an iron pipe of 1 meter lower diameter and about 10-15 meters length holds this nozzle and turbine. The dish must be designed so that its centre lies 1 meter above its outer boundary, that is, it forms the shape of a big umbrella roof as shown in the figure.


WORKING:


This structure directs all the heated air toward the centre of the dish, and no air escapes from the outer boundary. The heated air passing through the centre of the dish moves through the thin pipe, which ends in a nozzle. The escaping air therefore moves as a high-speed stream, much like the steam in a thermal power plant: it strikes the turbine blade just in front of the nozzle, transfers its momentum, the blade moves, and the next blade takes its place. Due to the continuous flow of hot air the turbine rotates and the connected shaft delivers mechanical output. Cold air moves in from the bottom of the dish to take the place of the hot air, gets heated, rises, and the cycle continues. We thus obtain mechanical power as a rotating turbine shaft, which can be used either to generate electricity (our main requirement) or wherever mechanical power is needed, such as in pumps or other machines.


The main difference between this setup and other plants using the same energy is that here the outcome is mechanical energy, usable in numerous ways, whereas projects such as solar panels and rotating dishes yield electrical energy only; this is a distinct advantage of the project. Another advantage is that the hot air produced by this installation can serve various other plants and industrial applications: drying operations in chemical industries, heat exchangers, and preheating of water in thermal power plants. At the domestic level it can be used for room heating, driers and water heating, which would greatly reduce domestic power consumption, since water heating accounts for about 30% of the electricity bill.


The problem lies in the efficiency of the turbine used: turbines manufactured today suit high-speed wind streams, but this model needs a better, lighter turbine that achieves a higher r.p.m. at the given wind speed. Since both the power output and the mechanical work depend on the r.p.m. of the rotating shaft, it should be high, and raising it requires improvements in turbine design and in the materials used for turbine construction. Existing industry products require a minimum wind speed of 25 km/hr to produce 10 MW of power, so the technology must improve to reduce this threshold.


The other possible disadvantage, common to most solar plants, is that it will not work at night or in rainy seasons: with little sun available in those periods, less air is heated, air movement drops, and power production pauses. Storing energy in rechargeable batteries might prove useful for those periods. The dish must be regularly supervised and checked to prevent the development of holes and cracks, which would greatly reduce the efficiency of the system by leaking air and would affect the overall performance.


COST, INVESTMENT AND RETURN:


The surface area of the dish can be calculated as follows. With X the distance from the sphere centre to the rim plane:

(X + 1)^2 = 10^2 + X^2
X^2 + 2X + 1 = 100 + X^2
2X = 99
X = 49.5 m

Hence, sphere radius R = X + 1 = 50.5 m.

sin θ = 10/50.5
θ = sin^-1(10/50.5) = 11.42°

Hence the ends of the dish subtend an angle of 2 x 11.42° = 22.84°.

Therefore, the surface area of the dish = (22.84/360) x 4π x R^2
= (22.84/360) x 4 x (22/7) x (50.5)^2
= 2034 m^2
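The geometry above can be checked numerically. The sketch below reproduces the essay's figures and also evaluates the standard spherical-cap formula 2πRh (h = 1 m rise) for comparison; the two give different areas, since the essay's fraction-of-sphere expression is an approximation of its own.

```python
import math

# Dish geometry from the essay: rim radius 10 m, central rise 1 m
rim_radius, rise = 10.0, 1.0

# Chord geometry: (X + rise)^2 = rim_radius^2 + X^2  ->  X = 49.5 m
X = (rim_radius**2 - rise**2) / (2 * rise)
R = X + rise                                   # sphere radius = 50.5 m

theta = math.degrees(math.asin(rim_radius / R))       # half-angle ~ 11.42 deg
area_essay = (2 * theta / 360.0) * 4 * math.pi * R**2 # essay's estimate ~ 2034 m^2
area_cap = 2 * math.pi * R * rise                     # exact cap area 2*pi*R*h
```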


Since the thickness of the dish is 3 mm, the volume of the dish = 2034 x 0.003 = 6.102 m^3.

Surface area of the circular pipe = π x diameter x height = 3.14 x 1 x 10 = 31.4 m^2

With a pipe thickness of 0.005 m, the volume of material used in the pipe = 31.4 x 0.005 = 0.157 m^3.


The cost analysis is done by a discrete method, that is, by itemizing the various costs as follows:

a) Cost of dish = (area of dish) x (thickness of plate) x (density of metal) x (cost of metal per kilogram) = 2034 m^2 x 0.003 m x 2700 kg/m^3 x Rs 130/kg = Rs 2,142,000

b) Cost of eight supports = Rs 10,800

c) Cost of pipe = (curved surface area of pipe) x (cost of sheet) = 31.4 m^2 x Rs 200/m^2 = Rs 6,280

d) Cost of nozzle and frustum = Rs 10,000

e) Cost of turbine, blades and shaft = Rs 0.5 million

f) Labour charge = Rs 3 million

g) Maintenance cost and land rent = Rs 1 million

h) Miscellaneous charges = Rs 2-2.5 million

Total expected cost = Rs 8.6 million


RETURN ANALYSIS:


In the Indian scenario there are almost 300 clear sunny days a year with on average 10 hours of bright sunlight, so the energy output in kilowatt-hours is

(300 days x 10 h x 3600 s x 10 MW) / (3.6 x 10^6 J/kWh) = 3 x 10^7 kWh


Expected sales rate = Rs 2 per unit


Total expected annual return = Rs 60 million


Expected annual profit: Rs 51 million


After analysing the cost and the return, it is found that this project pays for itself within the first year of operation and remains profitable thereafter, with the best returns in the summer season.
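The cost and return figures above can be reproduced in one short script. The miscellaneous item is taken at the lower end of the stated Rs 2-2.5 million range; everything else follows the essay's own numbers.

```python
# Itemized cost estimates from the essay (Rs)
costs = {
    "dish":             2034 * 0.003 * 2700 * 130,  # area*thickness*density*rate
    "supports":         10_800,
    "pipe":             31.4 * 200,
    "nozzle_frustum":   10_000,
    "turbine":          500_000,
    "labour":           3_000_000,
    "maintenance_rent": 1_000_000,
    "misc":             2_000_000,   # lower end of the Rs 2-2.5 million range
}
total_cost = sum(costs.values())     # roughly Rs 8.7 million

# Return: 300 sunny days x 10 h/day at 10 MW, Rs 2 per kWh
energy_kwh = 300 * 10 * 3600 * 10e6 / 3.6e6   # = 3e7 kWh
annual_return = energy_kwh * 2                # Rs 60 million
profit = annual_return - total_cost           # ~ Rs 51 million
```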


EFFECT OF GOVERNMENT INCENTIVES:


The Government of India has a very positive and supportive approach towards the solar power. It provides manufacturers and users of commercial and near commercial technologies, with ‘soft’ loans on favourable terms through the IREDA (Indian Renewable Energy Development Agency). The RBI (Reserve Bank of India) terms the renewable energy industry as “Priority Sector”, and has permitted Indian Companies to accept investment under the ‘automatic route’ without obtaining prior approval from RBI.


The Govt of India also provides exemptions/concessions in the excise tax duty on the manufacture of the solar energy systems and devices such as flat plate solar collectors, solar water heater and systems and any specially designed devices which operate those systems. Their incentives include concession on custom duty, 10 year tax holidays and sales and electricity tax exemption and preferential tariffs. It also includes capital subsidies.


The financial assistance of the Central Government is one of the factors that are highly helpful to the solar power market. It provides up to Rs 50 lakh per city for a period of 5 years: Rs 20 lakh for awareness generation, capacity building and other promotional activities, and Rs 10 lakh for preparation of a master plan, setting up institutional arrangements and oversight of implementation during that period.


The State Electricity Regulatory Commission has been mandated to source up to 10% of their power from renewable energy sources.


The Government based incentives also provide INR 0.5/ KWh of power sold, for independent power producers with capacity >5MW, for the projects that don’t claim accelerated depreciation benefits.


The State Government has set a remunerative price under power purchase policy for the power generated through solar energy system, fed to the grid by private sector. It also has provisions and policy packages including banking, third party sale and buy-back of solar energy power. The State Government also encourages NGO’s and small entrepreneurs for their participation in solar power market.


In a nutshell, the Indian government provides its support to a larger extent for the development of a sustainable solar power market.


SOLAR POWER MARKET:


India has seen only a modest pace of growth, relative to demand, in solar power generation. This has resulted in persistent supply shortages for both urban and rural customers. The main customers of the solar power generation market include homeowners, businesses and utility companies.


The demand for electricity in developing countries is growing at a fast pace. The potential worldwide market for solar power over the next 20 years is estimated at 600 GW or 6000 plants of 100 MW solar capacities, most of this in developing countries like India.


Over the next 20 years, actual installations are predicted to reach 45 GW, or over twenty 100 MW solar-capacity plants per year, assuming niche markets allow a 7.5% penetration rate. The actual penetration rate will depend on progress in reducing the cost/performance ratio, support from governments (and the GEF), and energy prices.


The above graph represents the daily utility load profiles for four developing countries: India, Jordan, Egypt and Mexico. The values for India are the average of three regions. After analyzing the graph, one can forecast that solar power has a great future in developing countries; hence this plan can be promising for India, since its maximum consumption occurs in the daytime, so such setups can be relied upon for our energy needs.


In 2008, the cumulative Solar power capacity was 15 GW. Growth in recent years has been 15% per year. There are estimated 40 million households (2.5% of the total) which were using solar power worldwide in 2004.


China is the leader; 10% of Chinese households use solar power; the target for 2020 being 30%.


In 2008, 65.6% of existing global solar power capacity was in China; followed by European Union (12.3%), Turkey (5.8%), Japan (4.1%) and Israel (2.8%). The Indian share was 1.2%.


The estimated break-up of solar power installations in India (till 2009) is as follows:

Sector (share)                                                   | Installations
Residential (80%)                                                | 2.108
Hotels (6%)                                                      | 0.158
Hospitals (3%)                                                   | 0.079
Industry (6%)                                                    | 0.158
Other (railway, defence, hostels, religious places, etc.) (5%)   | 0.132
Total                                                            | 2.635


The main factors contributing the demand rise in recent years are


• Growth in new urban housing; rising disposable income; increased propensity for consumer durables


• Arrival of ETC & improvements in supply chain


• Energy price hike


• Policy initiatives


According to a survey, five states will lead demand-expansion, as is evident from the following table:


Karnataka: 3.72 + 0.16 = 3.88
Maharashtra: 3.5 + 0.31 = 3.80
Tamil Nadu: 1.53 + 0.14 = 1.67
Andhra Pradesh: 1.08 + 0.09 = 1.17
Gujarat: 0.9 + 0.06 = 0.96

Share of these 5 states: 67.1%


Analysis of demand at the district level shows that a large part of the demand would come from selected urbanized districts. Some of the key districts (out of the 29 surveyed) with large potential for the solar power generation market are Bangalore, Pune, NCR, Thane, Hyderabad, Nagpur, Kolkata, Chennai, Coimbatore, Ahmedabad and Jaipur. Among them, Bangalore has the highest solar power potential, about 1.94 million m2, in 2022.


Solar power market development plan:


The solar power market development plan is divided into 3 phases:


1. Niche Market Awareness:


The main objectives are to rekindle interest in solar power generation in India, allow the industry to start up production processes, determine the current cost and performance of solar power generation, and evaluate new solar power generation concepts to see if they hold promise for long-term commercialization.


The main activity is to increase market awareness by funding one or two solar plants in India. These plants will likely be smaller than the optimum of over 200 MW because of the need to minimize investor risk and to start up production processes.


It is recommended that the initial market focus be on those markets where the conditions for solar power generation are currently most promising. Previous experience shows that these conditions are: a high solar resource, high fossil fuel prices, a daytime-peaking utility, inefficient conventional power plants, and access to water and the grid.


Depending on the cost of power displaced, the financial support to achieve cost parity will range from $400 to $750 million or $550 to $1000 per kW.


2. Market Expansion:


The purpose of the market expansion phase is to develop larger systems to benefit from economies of scale, continue with product development to improve performance and lower costs, create a market large enough that manufacturers can justify construction of production lines, and standardize system designs. A standard design will help to improve the system cost performance by reducing design costs, streamlining equipment procurement and minimizing construction and start-up problems.


In this phase, 3000 MW of additional solar capacity is installed, or fifteen 200 MW plants. The cost of solar power is expected to fall from over 10 cents/kWh to between 7 and 8 cents per kWh.


Depending on the cost of power displaced, the financial support to achieve cost parity will range from $0.5 to $1.8 billion or $350 to $600 per kW.


3. Market acceptance:


The final part in the development plan is the market acceptance phase. The goal for this phase is to set up the necessary market structure so that solar power generating plants can compete with conventional power plants without financial support from the GEF or others.


The investment requirement in this phase is the most difficult to estimate and subject to the widest variation. The cost of solar-generated electricity is expected to fall close to conventional power values; a small difference in solar costs can have a huge impact on market penetration.


The solar power generation market has the ability to dramatically transform lives. In a country where 450 million people live without access to electricity and depend on kerosene and other alternatives for whatever little lighting they can get at night, solar power applied in small home lighting systems lightens up their lives.




Study Of Multimedia Data Compression Methods Engineering Essay

This report covers two experiments: 'Audio Compression Using Down-sampling and Quantization' and 'Image Compression Using JPEG'. The aim is to understand how audio and image compression are performed using different methods and which method is more effective. The major findings are: for audio compression, the sampling rate and quantization resolution determine the quality of the compressed audio; for image compression, the DCT and quantization determine the compression ratio and image quality.

Keywords

INTRODUCTION

AUDIO COMPRESSION USING DOWN-SAMPLING AND QUANTIZATION

2.1 Experiment 1: Effect of sampling rate and quantization resolution on sound quality

Multimedia data has high redundancy and results in very large file sizes. Audio data is composed of a set of discrete sound samples; each sample is quantized and represented by a binary code. In this experiment the principles of sampling a continuous-time signal, increasing or decreasing the sampling rate of a discrete-time signal, and changing the quantization resolution are explored. Speech and music audio are recorded with various sampling rates and bits per sample, and sampling-rate conversion and quantization are applied to the digital signals. The sound quality obtained with different filters and quantizers is compared and observed.

2.1.1 Experiment Procedure

Part 1) Investigate the effect of sampling rate and bits per sample on sound quality

1) Different sampling frequencies and bit rate of audio recording.

a. Windows Media Player on the computer is started.

b. The recording control is configured to change the source of sound input.

The volume control panel is opened (double-click the “speaker” icon in the system tray); select “Options -> Properties -> Recording”, then tick all three of “CD Audio, Microphone, Stereo Mix” in the recording control panel, as shown in Fig. 1, and click “OK”.

In the recording panel below (Fig. 2), “Stereo Mix” is selected so that the internal sound of the computer can be recorded.

The sound recorder is opened (Accessories -> Entertainment -> Sound Recorder). The recorder properties are adjusted to set the sampling rate to 11.025 kHz, 8 bits/sample, by selecting “File -> Properties -> Convert Now” (Fig. 4): “PCM” is chosen for “Format”, “11.025 kHz, 8 bits, mono” for “Attributes”, and “.wav” is typed for “Save as”.

The “record” button is pushed and 30 seconds of the played audio are recorded and saved to a file in the “.wav” format.

The same audio segment is recorded using different sampling rates and bits/sample, and the sound quality is compared. Audio quality is subjectively evaluated as follows:

No | Perceptual Quality | Comment
1  | Poor               | The sound is corrupted by excessive noise and is no longer understandable
2  | Acceptable         | The sound is corrupted by a moderate noise level
3  | Good               | The sound quality is perceived to be similar to the original sound

The procedure above is repeated for the following recording parameters, and the perceptual quality is noted:

Sampling rate | Source | Bits/sample | Perceptual quality
11.025 kHz    | Music  | 8           | Acceptable
11.025 kHz    | Music  | 16          | Good
11.025 kHz    | Speech | 8           | Acceptable
11.025 kHz    | Speech | 16          | Good
44.1 kHz      | Music  | 8           | Acceptable
44.1 kHz      | Music  | 16          | Good
44.1 kHz      | Speech | 8           | Acceptable
44.1 kHz      | Speech | 16          | Good

Here we can see that the more bits per sample, the better the quality obtained, and the higher the sampling rate, the better the quality of the sampled audio.
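The bits-per-sample effect has a standard theoretical backing: each extra bit of an ideal uniform quantizer adds about 6 dB of signal-to-noise ratio. This rule of thumb (for a full-scale sine input) is background theory, not a number measured in the lab:

```python
def quantization_snr_db(bits):
    """Theoretical SNR of an ideal uniform quantizer driven by a
    full-scale sine wave: SNR ~ 6.02*b + 1.76 dB."""
    return 6.02 * bits + 1.76

# 8 vs 16 bits/sample: roughly 50 dB vs 98 dB of headroom over
# quantization noise, which matches 16-bit recordings sounding "Good"
snr8, snr16 = quantization_snr_db(8), quantization_snr_db(16)
```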

The two programs can be applied to a sound file recorded earlier at a sampling frequency of 44 kHz, 16 bits/sample.

MATLAB is started, the directory is changed to your own working directory, and all the program files and sample audio files are copied there.

The “down4up4_nofilt” program is applied to a sound file recorded with a sampling frequency of 44 kHz; for example, type down4up4_nofilt('myinput.wav','myoutput.wav') in the MATLAB command window.

The program computes the mean square error between the original and the interpolated signal. The original sound and the sound after down-sampling and up-sampling are compared in terms of perceptual sound quality, waveform, frequency spectrum and mean square error, and the results are tabulated and analyzed.
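A minimal stand-in for the down-sample/up-sample experiment can be written in pure Python. The sample-and-hold up-sampler and the synthetic 440 Hz tone below are illustrative assumptions replacing the lab's scripts and recorded files:

```python
import math

def down4_up4(x):
    """Keep every 4th sample, then up-sample by sample-and-hold.
    A crude stand-in for the lab's down4up4 scripts (no anti-alias
    filter), just to make the MSE comparison concrete."""
    kept = x[::4]
    held = [kept[i // 4] for i in range(len(kept) * 4)]
    return held[:len(x)]

def mse(a, b):
    """Mean square error between two equal-length signals."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)

# A 440 Hz tone "recorded" at 44.1 kHz (hypothetical input)
fs = 44100
x = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs // 10)]
err = mse(x, down4_up4(x))   # small but nonzero, as in the tables below
```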

The steps from part b are repeated using “down4up4_filt” instead, and the results are tabulated and analyzed.

Music                 | Down-/up-sampling (no filter)                            | Down-/up-sampling (with filter)
Mean square error     | 0.000638157                                              | 0.000130789
Perceptual quality    | Acceptable                                               | Good
Waveform              | Discrete; the levels are very high                       | Discrete; the levels are lower
Frequency spectrum    | Oscillates in a decreasing manner and rises at the end   | Oscillates in a decreasing manner

With the filter the quality is better, because the filter reduces the noise effect and thus reduces the mean square error.

Two MATLAB programs are given: quant_uniform.m and quant_mulaw.m. The “quant_uniform” program quantizes an input sequence with a uniform quantizer having a user-specified number of levels; the “quant_mulaw” program quantizes it with a mu-law quantizer.

The “quant_uniform” function is applied to a sound file recorded with 16 bits/sample to quantize it to fewer quantization levels. For example, to use 256 quantization levels (8 bits per sample), set N = 256 and type quant_uniform('myinput.wav','myoutput.wav',256) in the MATLAB command window.

The original and quantized sequences are compared for different quantization levels, in terms of perceptual sound quality and waveform. The quantization error (mean square error between the original and quantized samples) is recorded and compared, and the minimum N that gives good sound quality is determined.

“quant_mulaw” function is applied to a sound file recorded with 16 bits/sample, with mu=16.

The experiment is repeated for both music and speech audio data with different quantization levels N; the minimum N that gives acceptable sound quality is determined and the results are tabulated.

Music (N is the quantization level, N = 2^b, b = number of bits per level):

N             | Uniform MSE      | Perceptual quality | Mu-law (mu = 16) MSE
2^12 = 4096   | 1.17327 x 10^-8  | Good               | 1.36375 x 10^-9
2^10 = 1024   | 1.8777 x 10^-7   | Good               | 2.30712 x 10^-8
2^8 = 256     | 2.95153 x 10^-6  | Acceptable         | 3.66273 x 10^-7
2^6 = 64      | 5.27229 x 10^-5  | Poor               | 5.52074 x 10^-6
2^4 = 16      | 0.00112348       | Poor               | 9.02207 x 10^-5

Minimum acceptable N: uniform quantization N = 2^8; mu-law quantization N = 2^6.

Speech (N is the quantization level, N = 2^b, b = number of bits per level):

N             | Uniform MSE      | Perceptual quality | Mu-law (mu = 16) MSE
2^12 = 4096   | 1.15236 x 10^-8  | Good               | 3.15122 x 10^-9
2^10 = 1024   | 1.84788 x 10^-7  | Acceptable         | 5.02324 x 10^-8
2^8 = 256     | 2.92896 x 10^-6  | Poor               | 8.06364 x 10^-7
2^6 = 64      | 4.744 x 10^-5    | Poor               | 1.28916 x 10^-5
2^4 = 16      | 0.000769637      | Poor               | 0.000207797

Minimum acceptable N: uniform quantization N = 2^10; mu-law quantization N = 2^8.

[Figure: waveforms of the music and speech signals under uniform and mu-law quantization at N = 16, 64, 256, 1024 and 4096]

Here we can see that the higher the quantization level N, the better the sound quality and the smaller the error.

However, mu-law quantization outperforms uniform quantization, as it uses a smaller distance between adjacent levels at low signal amplitudes, where our ears are more sensitive to quantization noise.
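The companding idea can be sketched in a few lines of Python. Note the assumptions: mu = 255 here is the telephony standard (the lab used mu = 16, which also works), and the quantizer is a generic mid-rise design on [-1, 1], not necessarily the lab scripts' exact implementation:

```python
import math

MU = 255.0  # telephony-standard mu; the lab used mu = 16

def mulaw_compress(x, mu=MU):
    """mu-law companding of x in [-1, 1]: y = sign(x)*ln(1+mu|x|)/ln(1+mu)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mulaw_expand(y, mu=MU):
    """Inverse of mulaw_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

def quantize(x, n_levels):
    """Uniform mid-rise quantizer on [-1, 1]."""
    step = 2.0 / n_levels
    idx = min(n_levels - 1, int((x + 1.0) / step))
    return -1.0 + (idx + 0.5) * step

def quant_mulaw(x, n_levels, mu=MU):
    """Companded quantization: compress, quantize uniformly, expand."""
    return mulaw_expand(quantize(mulaw_compress(x, mu), n_levels), mu)
```

For a small-amplitude sample the companded quantizer lands much closer to the input than the uniform one at the same N, which is exactly why its MSE columns in the tables above are roughly an order of magnitude lower.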

In conclusion, as the experiments above show, a higher sampling rate, more bits per sample, filtering before resampling, and mu-law quantization all give better sound quality.

Investigate the effectiveness of the JPEG scheme for compressing photographic images

Represent the compressed image in the frequency domain using the DCT transform

Investigate the importance of different DCT coefficients

Investigate the trade-off in the selection and quantization of DCT coefficients and its effect on compression ratio and image quality.

The Discrete Cosine Transform (DCT) is at the centre of the most popular lossy image compression standard on the internet, JPEG. We experiment with how to transform an image into a series of 8 x 8 blocks of DCT coefficients, how to quantize the DCT coefficients, and how to reconstruct the image from the quantized coefficients. The original image and the decompressed image can then be compared.

In the Image Processing Toolbox, the two-dimensional discrete cosine transform (DCT) of an image is computed with the dct2 function. The DCT has the property that, for a typical image, most of the visually significant information is concentrated in just a few coefficients; thus, the DCT is often used in image compression applications.

The toolbox offers two different ways to compute the DCT. The first is the dct2 function, which uses an FFT-based algorithm for quick computation with large inputs. For small square inputs, such as 8-by-8 or 16-by-16, it may be more efficient to use the DCT transform matrix, which is returned by the function dctmtx. The M-by-M transform matrix T is given by T(p+1, q+1) = 1/sqrt(M) for p = 0, and sqrt(2/M) * cos(pi*(2q+1)*p / (2M)) for 1 <= p <= M-1, with q = 0, ..., M-1.

T*A is an M-by-M matrix whose columns contain the one-dimensional DCT of the columns of A. The two-dimensional DCT of A can be computed as B = T*A*T', where T' is the transpose of T. Since T is a real orthonormal matrix, its inverse is the same as its transpose; thus, T'*B*T is the inverse two-dimensional DCT of B.

For the JPEG image compression algorithm, the input image is divided into 8-by-8 or 16-by-16 blocks and the two-dimensional DCT is computed for each block. The DCT coefficients are quantized, coded and transmitted. The JPEG receiver then decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of each block, and puts the blocks back together into a single image. For typical images, many of the DCT coefficients have values close to zero; these coefficients can be discarded while still maintaining the quality of the reconstructed image.

T = dctmtx(8);
Mask = [ 1 1 1 1 0 0 0 0
         1 1 1 0 0 0 0 0
         1 1 0 0 0 0 0 0
         1 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 ];
A = imread('cameraman.tif');
A = double(A)/255;
B = blkproc(A, [8 8], 'P1*x*P2', T, T');   % forward DCT of each 8x8 block
C = blkproc(B, [8 8], 'P1.*x', Mask);      % keep only the masked coefficients
D = blkproc(C, [8 8], 'P1*x*P2', T', T);   % inverse DCT of each block
imshow(A), figure, imshow(D);
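The same pipeline can be sketched without the toolbox. The pure-Python sketch below rebuilds the dctmtx-style matrix, checks that T*T' is the identity (so T' really inverts T), and runs the mask-and-reconstruct steps on a hypothetical smooth 8x8 block; the block values and the triangular mask are illustrative assumptions, not the lab data.

```python
import math

def dctmtx(M):
    """M-by-M DCT matrix, matching MATLAB's dctmtx: row 0 is 1/sqrt(M);
    row p>0 is sqrt(2/M)*cos(pi*(2q+1)*p/(2M)) for q = 0..M-1."""
    T = []
    for p in range(M):
        s = math.sqrt(1.0 / M) if p == 0 else math.sqrt(2.0 / M)
        T.append([s * math.cos(math.pi * (2 * q + 1) * p / (2 * M))
                  for q in range(M)])
    return T

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

T = dctmtx(8)
Tt = transpose(T)
I8 = matmul(T, Tt)          # orthonormality check: should be the identity

# Hypothetical smooth 8x8 "image block" with values in [0, 1]
A = [[(i + j) / 14.0 for j in range(8)] for i in range(8)]

B = matmul(matmul(T, A), Tt)                                  # B = T*A*T'
mask = [[1 if i + j < 4 else 0 for j in range(8)] for i in range(8)]
C = [[B[i][j] * mask[i][j] for j in range(8)] for i in range(8)]
D = matmul(matmul(Tt, C), T)                                  # D = T'*C*T

max_err = max(abs(A[i][j] - D[i][j]) for i in range(8) for j in range(8))
```

For a smooth block the discarded high-frequency coefficients are tiny, so max_err stays small; on a busy block the same mask blurs the result, which is the quality loss observed with 'cameraman.tif' below.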

Observe and understand the mini program above.

The grayscale image 'cameraman.tif' is compressed and decompressed according to the code above.

The difference between the original image and the reconstructed image is observed. Are there any noticeable differences?

Observe the quality of the reconstructed image with several different quantization matrices. What happens if all the elements in the quantization matrix are set to 1?

Can all of the DCT coefficients be removed except for the DC value without affecting the quality of the reconstructed image too much? Discuss your answer.

The above experiments are repeated using the 'rice.tif' image.

Compression and decompression of the grayscale image 'cameraman.tif'

cameraman

Original image


Reconstructed image

After compression, the reconstructed image does not preserve the quality of the original image; it looks blurrier. This is caused by the coefficients that are set to '0' in the quantization matrix during compression.

Matlab Code: image compression and decompression using the 8x8 DCT matrix.

T = dctmtx(8);
Mask = [ 1 1 1 1 0 0 0 0
         1 1 1 0 0 0 0 0
         1 1 0 0 0 0 0 0
         1 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 ];
A = imread('cameraman.tif');
A = double(A)/255;
B = blkproc(A, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D = blkproc(C, [8 8], 'P1*x*P2', T', T);

If all the elements in the quantization matrix are set to 1:

cameraman

Original image

cameraman

Reconstructed image

The reconstructed image and the original image are the same. This is because all the entries in the quantization matrix are set to '1', so no coefficients are discarded and no compression is done.
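This observation follows directly from the orthonormality of T and can be checked numerically. A hedged Python/NumPy sketch (dct_matrix mirrors dctmtx; not the lab's MATLAB code): with an all-ones mask, every coefficient is kept, so the inverse transform reproduces the block to floating-point precision.

```python
import numpy as np

def dct_matrix(M=8):
    q = np.arange(M)
    T = np.sqrt(2.0 / M) * np.cos(np.pi * (2 * q[None, :] + 1) * q[:, None] / (2 * M))
    T[0, :] = 1.0 / np.sqrt(M)
    return T

T = dct_matrix(8)
block = np.random.default_rng(1).random((8, 8)) * 255
mask = np.ones((8, 8))                         # keep every coefficient

# T' * (Mask .* (T*x*T')) * T == x when Mask is all ones
recon = T.T @ (mask * (T @ block @ T.T)) @ T
assert np.allclose(recon, block)
```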

Matlab Code: image compression and decompression using the 8x8 DCT matrix.

T = dctmtx(8);
Mask = [ 1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1 ];
A = imread('cameraman.tif');
A = double(A);
B = blkproc(A, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D = blkproc(C, [8 8], 'P1*x*P2', T', T);
imshow(uint8(A)), figure, imshow(uint8(D));

If all of the DCT coefficients except for the DC value are removed:

cameraman

Original image


Reconstructed image

We are unable to remove all the DCT coefficients except the DC value without affecting the quality of the reconstructed image. When those coefficients are removed, the high-frequency content is removed as well, so information is lost. With only the DC value left, the compression ratio is very high and the quality drops sharply: the reconstructed image is very blurred and shows a strong blocking effect.
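The blocking effect has a precise cause: keeping only the DC coefficient reconstructs each 8x8 block as a constant equal to that block's mean value. A hedged Python/NumPy sketch (dct_matrix mirrors dctmtx; not the lab's MATLAB code):

```python
import numpy as np

def dct_matrix(M=8):
    q = np.arange(M)
    T = np.sqrt(2.0 / M) * np.cos(np.pi * (2 * q[None, :] + 1) * q[:, None] / (2 * M))
    T[0, :] = 1.0 / np.sqrt(M)
    return T

T = dct_matrix(8)
block = np.random.default_rng(2).random((8, 8)) * 255
mask = np.zeros((8, 8))
mask[0, 0] = 1                                  # keep only the DC coefficient

recon = T.T @ (mask * (T @ block @ T.T)) @ T
# Every pixel of the reconstructed block equals the block's mean,
# which is why adjacent blocks show visible edges (blocking).
assert np.allclose(recon, block.mean())
```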

Matlab Code: image compression and decompression using the 8x8 DCT matrix.

T = dctmtx(8);
Mask = [ 1 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 ];
A = imread('cameraman.tif');
A = double(A);
B = blkproc(A, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D = blkproc(C, [8 8], 'P1*x*P2', T', T);
imshow(uint8(A)), figure, imshow(uint8(D));

Compression and decompression of the grayscale image 'rice.tif'

rice

Original image


Reconstructed image

The reconstructed image is blurred and the quality is reduced. This result is the same as the result for 'cameraman.tif'.

Matlab Code: image compression and decompression using the 8x8 DCT matrix.

T = dctmtx(8);
Mask = [ 1 1 1 1 0 0 0 0
         1 1 1 0 0 0 0 0
         1 1 0 0 0 0 0 0
         1 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 ];
A = imread('rice.tif');
A = double(A);
B = blkproc(A, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D = blkproc(C, [8 8], 'P1*x*P2', T', T);
imshow(uint8(A)), figure, imshow(uint8(D));

If all the elements in the quantization matrix are set to 1:

rice

Original image

rice

Reconstructed image

The reconstructed image and the original image are the same. This result is the same as the result for 'cameraman.tif'.

Matlab Code: image compression and decompression using the 8x8 DCT matrix.

T = dctmtx(8);
Mask = [ 1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1
         1 1 1 1 1 1 1 1 ];
A = imread('rice.tif');
A = double(A);
B = blkproc(A, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D = blkproc(C, [8 8], 'P1*x*P2', T', T);
imshow(uint8(A)), figure, imshow(uint8(D));

If all of the DCT coefficients except for the DC value are removed:

rice

Original image

Reconstructed image

The reconstructed image is very blurred. This result is the same as the result for 'cameraman.tif'.

Matlab Code: image compression and decompression using the 8x8 DCT matrix.

T = dctmtx(8);
Mask = [ 1 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 ];
A = imread('rice.tif');
A = double(A);
B = blkproc(A, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D = blkproc(C, [8 8], 'P1*x*P2', T', T);
imshow(uint8(A)), figure, imshow(uint8(D));

The color image 'peppers.png' is loaded into Matlab. For the compression of color images, it is first converted to the YCbCr color format, followed by sub-sampling of the chrominance (Cb and Cr) channels. Use the functions rgb2ycbcr and ycbcr2rgb to convert to the YCbCr color space and back. Use the function imresize to sub-sample the chrominance channels. Use the same mask as in the code above for both the luminance and chrominance channels.

Without any sub-sampling, perform the DCT and quantization on all the channels, then reconstruct the image and observe its quality.

Using 4:2:0 chroma sub-sampling, perform the DCT and quantization on every channel, then reconstruct the image and observe its quality.

Are there any significant differences between the two reconstructed images above? Discuss your answer.

Write a simple function that takes an image and a mask as input, performs compression and decompression on the image (color image compression if the input image is color), and displays the original and reconstructed images side by side. The program should also be able to compute the SNR of the reconstructed image.

1) DCT and quantization on all the channels without any sub-sampling.

peppers

Original image


Compress in RGB format


Compress in YCbCr format

Matlab Code: image compression and decompression

T = dctmtx(8);
Mask = [ 1 1 1 1 0 0 0 0
         1 1 1 0 0 0 0 0
         1 1 0 0 0 0 0 0
         1 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 ];
A = imread('peppers.png');
YCbCr = rgb2ycbcr(A);
YCbCr = double(YCbCr);
Y = YCbCr(:,:,1);
Cb = YCbCr(:,:,2);
Cr = YCbCr(:,:,3);
B = blkproc(Y, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D(:,:,1) = blkproc(C, [8 8], 'P1*x*P2', T', T);
B = blkproc(Cb, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D(:,:,2) = blkproc(C, [8 8], 'P1*x*P2', T', T);
B = blkproc(Cr, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D(:,:,3) = blkproc(C, [8 8], 'P1*x*P2', T', T);
RGB = ycbcr2rgb(uint8(D));
figure, imshow(uint8(D)), figure, imshow(RGB);

2) DCT and quantization on every channel using 4:2:0 chroma sub-sampling.


Compress in RGB format


Compress in YCbCr format

Matlab Code: image compression and decompression

T = dctmtx(8);
Mask = [ 1 1 1 1 0 0 0 0
         1 1 1 0 0 0 0 0
         1 1 0 0 0 0 0 0
         1 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 ];
A = imread('peppers.png');
YCbCr = rgb2ycbcr(A);
YCbCr = double(YCbCr);
Y = YCbCr(:,:,1);
Cb = YCbCr(:,:,2);
Cr = YCbCr(:,:,3);
Cb = imresize(Cb, 0.5);
Cr = imresize(Cr, 0.5);
B = blkproc(Y, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D(:,:,1) = blkproc(C, [8 8], 'P1*x*P2', T', T);
B = blkproc(Cb, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D(:,:,2) = imresize(blkproc(C, [8 8], 'P1*x*P2', T', T), 2);
B = blkproc(Cr, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D(:,:,3) = imresize(blkproc(C, [8 8], 'P1*x*P2', T', T), 2);
RGB = ycbcr2rgb(uint8(D));
figure, imshow(uint8(D)), figure, imshow(RGB);

Comparing the DCT-and-quantization images in RGB and YCbCr without sub-sampling against those with 4:2:0 chroma sub-sampling, there is no visible difference between them. This is because the human eye is less sensitive to Cb and Cr than to Y, the luminance. Even when the Cb and Cr resolution is reduced, there is no difference the human eye can detect.
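The 4:2:0 step itself is simple to sketch. The following Python/NumPy sketch is illustrative only: the function names are assumptions, and the color conversion is a simplified full-range BT.601 transform (MATLAB's rgb2ycbcr additionally applies studio-range offsets and scaling, which this sketch omits). It shows the half-resolution averaging of a chroma channel and the nearest-neighbour upsampling back to full resolution:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # Simplified full-range BT.601 conversion (illustrative, not rgb2ycbcr)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_420(chan):
    # Average each 2x2 neighbourhood: half resolution in both directions
    h, w = chan.shape
    return chan.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(chan):
    # Nearest-neighbour upsampling back to full resolution
    return np.repeat(np.repeat(chan, 2, axis=0), 2, axis=1)

rgb = np.random.default_rng(3).random((16, 16, 3)) * 255
y, cb, cr = rgb_to_ycbcr(rgb)
cb_up = upsample(subsample_420(cb))
assert cb_up.shape == cb.shape          # full resolution restored

flat = np.full((16, 16), 100.0)         # flat chroma survives sub/up-sampling exactly
assert np.allclose(upsample(subsample_420(flat)), flat)
```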

3) A simple function that takes an image and a mask as input and performs compression and decompression on the image.

function [disp,snr] = codec_snr(file_name, mask)
clear D
A = imread(file_name);
T = dctmtx(8);
if isgray(A)
    A = double(A);
    B = blkproc(A, [8 8], 'P1*x*P2', T, T');
    C = blkproc(B, [8 8], 'P1.*x', mask);
    D = blkproc(C, [8 8], 'P1*x*P2', T', T);
    disp = uint8(D);
elseif isrgb(A)
    YCbCr = rgb2ycbcr(A);
    YCbCr = double(YCbCr);
    Y = YCbCr(:,:,1);
    Cb = imresize(YCbCr(:,:,2), 0.5);
    Cr = imresize(YCbCr(:,:,3), 0.5);
    B = blkproc(Y, [8 8], 'P1*x*P2', T, T');
    C = blkproc(B, [8 8], 'P1.*x', mask);
    D(:,:,1) = blkproc(C, [8 8], 'P1*x*P2', T', T);
    B = blkproc(Cb, [8 8], 'P1*x*P2', T, T');
    C = blkproc(B, [8 8], 'P1.*x', mask);
    D(:,:,2) = imresize(blkproc(C, [8 8], 'P1*x*P2', T', T), 2);
    B = blkproc(Cr, [8 8], 'P1*x*P2', T, T');
    C = blkproc(B, [8 8], 'P1.*x', mask);
    D(:,:,3) = imresize(blkproc(C, [8 8], 'P1*x*P2', T', T), 2);
    disp = ycbcr2rgb(uint8(D));
end
A = imread(file_name);
sx = double(A).^2;
sd = (double(A) - double(disp)).^2;
snr = 10*log10(mean(sx(:))/mean(sd(:)));
figure('Position',[8 8 1000 500]), subplot(1,2,1), imshow(A);
text(0,0,'Before','HorizontalAlignment','center','BackgroundColor',[1 1 1]);
subplot(1,2,2), imshow(disp);
text(0,0,'After','HorizontalAlignment','center','BackgroundColor',[1 1 1]);
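The SNR formula used at the end of the function can be stated compactly: SNR = 10*log10( mean(signal^2) / mean((signal - reconstruction)^2) ). A hedged Python/NumPy sketch of the same computation (the name snr_db is illustrative):

```python
import numpy as np

def snr_db(original, reconstructed):
    # SNR = 10*log10( mean(x^2) / mean((x - x_hat)^2) ), in decibels
    x = np.asarray(original, dtype=float)
    x_hat = np.asarray(reconstructed, dtype=float)
    return 10 * np.log10((x ** 2).mean() / ((x - x_hat) ** 2).mean())

a = np.array([10.0, 20.0, 30.0])
b = a + 1.0                         # unit error on every sample
# mean(x^2) = 1400/3, mean(error^2) = 1
assert abs(snr_db(a, b) - 10 * np.log10(1400 / 3)) < 1e-9
```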

Discussion:

1. In this experiment, we find out how image compression works using the DCT, quantization, and sub-sampling methods, and how each of these affects the quality of the images.

2. JPEG is an effective scheme for compressing photographic images, as it reduces the information in the image while maintaining image quality. It removes information to which the human eye is less sensitive, thus reducing the size of the image file. By using the right quantization scheme, image quality is preserved.

3. Compression efficiency = (file size of compressed image)/ (file size of original image)
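As a rough worked instance of this ratio (an illustration only: it counts kept DCT coefficients per block and ignores the entropy-coding stage, so real JPEG file-size ratios will differ), a mask that keeps k of the 64 coefficients in each 8x8 block corresponds to a coefficient ratio of k/64:

```python
import numpy as np

# The lab's low-frequency Mask keeps coefficients with i + j <= 3
mask = (np.add.outer(np.arange(8), np.arange(8)) <= 3).astype(float)
kept = int(mask.sum())      # number of surviving coefficients per block
ratio = kept / 64           # naive per-block coefficient ratio
assert kept == 10
assert abs(ratio - 0.15625) < 1e-12
```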

4. Storing the compressed image as quantized DCT coefficients is a suitable approach. The human eye is sensitive to differences in brightness over large areas, but less sensitive to the strength of high-frequency brightness variation. The DCT and quantization are used to reduce the higher-frequency components while avoiding a loss of image quality.

5. The low frequencies are the most important, followed by the medium and then the high frequencies. Because the low frequencies are perceptually important, they require fine quantization.

6. The effect of reducing the number of DCT coefficients is that the compression ratio increases, but the image quality drops when the compression ratio becomes very high. The more high-frequency components the quantization matrix reduces to zero, the higher the compression ratio. Because of the heavy reduction of DCT coefficients, some information is lost, which causes the image to show blocking effects and reduced quality.

7. In this experiment, we learned that the DCT and quantization can be used for image compression. Different quantization matrices affect the compression differently, and sub-sampling can increase the compression ratio. Certain information can be reduced during compression because our eyes are only sensitive to certain information, and certain frequencies are more important than others and require finer quantization.

In the audio experiment, we were able to observe how a sample sound can be recorded while maintaining its quality. We were able to understand how each method works and some of their advantages over one another. The recording-quality requirement for speech is lower than for music.

In the image compression experiment, we were able to understand how different methods of compression work and how they are applied.


