
Seeded In Scaffolding Material Engineering Essay

 


Most conventional scaffold fabrication techniques, such as fiber bonding, solvent casting and melt moulding [2], yield random porous architectures that do not necessarily produce an appropriately homogeneous environment for bone formation. Moreover, a non-uniform microenvironment can leave regions with inadequate nutrient concentration, so that cultured tissue grows with poor cellular activity, preventing the formation of new tissue of homogeneous quality.


In the tissue engineering field, rapid prototyping is one of the most efficient techniques for designing and creating a highly porous artificial extracellular matrix (ECM), or scaffold, that accommodates and guides the proliferation of new cells. A scaffold is a polymeric porous structure made of biodegradable material such as poly-lactic-acid (PLA) and poly-glycolic-acid (PGA) [3]. To regenerate new tissues successfully, the whole process relies mainly on the structural formability of the tissue scaffold and on bioreactors that provide an appropriate environment for new cell viability and function. The rapid prototyping technique is capable of producing a complex product quickly from a computer model based on the patient's CT data. However, RP techniques still have limitations and shortcomings, such as mechanical strength, interconnected channels and pore distribution, that need to be resolved [1]. They still need to be improved in order to produce well-defined tissue-engineered scaffolds with appropriate chemical and mechanical microenvironments. In this review, we discuss further developments of RP techniques in tissue engineering in terms of their major aspects: methods and materials.


Rapid Prototyping Technologies [5]


Rapid prototyping is an advanced technology based on developments in computer technology and manufacturing. It is currently being used by investigators to produce scaffolds for use in tissue engineering. Rapid prototyping methods can be categorized as liquid-based, solid-based and powder-based. In the RP process, the 3D model is created one layer at a time, based on computer-generated data, until the whole product is complete.


The main RP systems used in tissue engineering are:


(1) Stereolithography Apparatus (SLA)


(2) Selective Laser Sintering (SLS)


(3) Fused Deposition Modeling (FDM)


(4) Three-dimensional printing (3-DP)


The advantages and limitations of each rapid prototyping technology applied in TE are summarized in the table below.


Table. Advantages and limitations of SFF fabrication techniques [5]


Technique: SLA
Advantages: easy to remove support and trapped materials; small features can be produced accurately.
Limitations: the choice of photopolymerizable, biocompatible, biodegradable liquid polymers is limited.

Technique: SLS
Advantages: good compressive strengths; greater material choice; no solvent required.
Limitations: processing temperatures are high; difficult to remove trapped material from small inner features.

Technique: FDM
Advantages: no trapped material within small features; no solvent required; good compressive strengths.
Limitations: support material is required for irregular structures; anisotropy between the XY and Z directions.

Technique: 3D-P
Advantages: wider choice of materials; low heat effect on the raw material.
Limitations: difficult to remove trapped material from small inner features; toxic organic solvents are needed; mechanical strength is not good enough.


From the comparison above, it can be seen that the main limitations concern the usable materials, toxic binders and poor feature symmetry [5].


Selective Laser Sintering Process (SLS) [6]


First, CAD data files of the object in the .STL file format are transferred to the RP system, where they are mathematically sliced into layers of equal thickness (a minimal slicing sketch is given after the process steps below). From this point the SLS process operates as follows:


- A thin layer of heat-fusible powder is deposited onto the part-building chamber.


-The bottom-most cross-sectional slice of the CAD part to be fabricated is selectively scanned on the layer of powder by a carbon dioxide laser. The intersection of the laser beam with the powder elevates the temperature to the point of melting, fusing the powder particles to form a solid mass.


- Each newly sintered layer of powder is fused to the previously formed layers, building up the object.
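As a minimal illustration of the slicing step mentioned above (and not part of the SLS system described in [6]), the sketch below computes equal-thickness slice planes for an STL-style triangle mesh and collects the contour segments where the triangles cross one plane; the toy tetrahedron and the 0.1 mm layer thickness are assumptions for illustration.

```python
# Minimal slicing sketch: equal-thickness planes through a triangle mesh.
import numpy as np

def slice_heights(triangles, layer_thickness):
    """Return the z-heights of the equal-thickness slicing planes."""
    z_min = triangles[:, :, 2].min()
    z_max = triangles[:, :, 2].max()
    return np.arange(z_min + layer_thickness / 2, z_max, layer_thickness)

def intersect_layer(triangles, z):
    """Collect the line segments where triangles cross the plane z = const."""
    segments = []
    for tri in triangles:
        points = []
        for a, b in ((0, 1), (1, 2), (2, 0)):          # the three edges
            za, zb = tri[a, 2], tri[b, 2]
            if (za - z) * (zb - z) < 0:                # edge crosses the plane
                t = (z - za) / (zb - za)
                points.append(tri[a] + t * (tri[b] - tri[a]))
        if len(points) == 2:
            segments.append((points[0][:2], points[1][:2]))  # keep x, y only
    return segments

# Toy example: one tetrahedron (mm), sliced into 0.1 mm layers
tet = np.array([[[0, 0, 0], [10, 0, 0], [0, 10, 0]],
                [[0, 0, 0], [10, 0, 0], [0, 0, 10]],
                [[0, 0, 0], [0, 10, 0], [0, 0, 10]],
                [[10, 0, 0], [0, 10, 0], [0, 0, 10]]], dtype=float)
for z in slice_heights(tet, 0.1)[:3]:
    print(z, len(intersect_layer(tet, z)), "contour segments")
```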


Fig. 1 shows the process chain of the SLS technique.


Fig.1. Schematic layout of the SLS process. [6]


Improvements to the SLS process, such as a smaller laser spot size, finer powder and thinner layer thickness, are expected to allow smaller features and thus produce the desired scaffolds for TE [1]. The ease of removing trapped loose powder is also one of the criteria for current techniques; existing solutions include ultrasonic vibration, compressed air, bead blasting and/or a suitable solvent [1].


The materials conventionally used by SLS systems are non-biocompatible and bio-inert in nature. Because of this, the application of SLS in scaffold production is still limited. Moreover, SLS fabrication of TE scaffolds often requires an organic solvent to remove trapped material [1], which can harm the inner organs when the structure is implanted in the human body [6].


K.H. Tan et al. described biocompatible polymers, namely polyetheretherketone (PEEK), poly(vinyl alcohol) (PVA), polycaprolactone (PCL) and poly(L-lactic acid) (PLLA), and a bioceramic, hydroxyapatite (HA), for fabricating TE scaffolds [6]. With these materials, the post-processing does not require any organic solvent to remove trapped material.


The properties and sources of these polymers are described below:


Properties listed: molecular weight (Mw), melting point (Tm), glass-transition temperature (Tg), density, average inherent viscosity, particle size and source.

PCL: Mw 10,000; Tm 60 °C; Tg -60 °C; source: Polyscience Inc. (USA)

PLLA: Tm 172 °C ~ 186.8 °C; Tg 60.5 °C; avg. inherent viscosity 2.53 dl/g; source: PURAC Asia Pacific Pte. Ltd

PEEK: Tm 343 °C; Tg 143 °C; particle size 25 µm; source: Victrex PLC, Lancashire, UK

PVA: Mw 89,000 ~ 98,000; Tm 220 ~ 240 °C; Tg 58 ~ 85 °C; particle size 100 µm; source: Aldrich Chemical Company

HA: density 3.05 g/cm³; particle size below 60 µm (Coulter counter analysis)


Among all these biomaterials, HA is highly biocompatible and can provide good bonding between tissue and the ceramic material [6]. In the process, released calcium and phosphate ions induce osteogenesis and provide the link between the ceramic implant and the bone [6].


The optimized laser sintering parameters found in their experiments are given in the table below [6]:


Material | Part bed temperature (°C) | Laser power (W) | Scan speed (mm/s)
PCL | 40 | 2-3 | 3810
PLLA | 60 | 12-15 | 1270
PVA | 65 | 13-15 | 1270-1778
PEEK | 140 | 16-21 | 5080
PEEK/HA | 140 | 16 | 5080


K.H. Tan et al. reported that, in sintering the PEEK/HA bio-composite blend, reducing the proportion of PEEK in the powder made the scaffold fragile and therefore impractical for laser sintering. Their experimental results show that an HA content of 40 wt% provides a structure with good integrity, and they recommend keeping the composition at this value to obtain good results.


From this research it can be seen that (i) part bed temperature, (ii) laser power and (iii) scan speed are the three main parameters controlling the micro-porosity of the structure [6].
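As a rough illustration of how laser power and scan speed combine (this ratio is not taken from [6]), the sketch below computes the energy delivered per unit length of scan track, P/v, for the settings tabulated above; scan spacing and spot size, which also influence the energy density, are not given in the table and are omitted, and mid-range values are assumed where a range is quoted.

```python
# Illustrative comparison only: energy per unit scan length, P / v.
settings = {                      # material: (laser power W, scan speed mm/s)
    "PCL":     (2.5, 3810),       # mid-range of 2-3 W
    "PLLA":    (13.5, 1270),      # mid-range of 12-15 W
    "PVA":     (14.0, 1524),      # mid-ranges of 13-15 W and 1270-1778 mm/s
    "PEEK":    (18.5, 5080),      # mid-range of 16-21 W
    "PEEK/HA": (16.0, 5080),
}
for material, (power_w, speed_mm_s) in settings.items():
    energy_per_mm = power_w / speed_mm_s          # J per mm of scan track
    print(f"{material:8s} {1000 * energy_per_mm:.2f} mJ/mm")
```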


Another powder-based RP technique is 3D printing (3DP). Bioresorbable polymers and copolymers (based on either polycaprolactone or polylactic and polyglycolic acids) [1, 7, 8] are used in this technique. The 3DP system deposits a liquid binder through multiple ink-jets onto a powder bed. The powder particles are glued to each other within a layer, with layer thicknesses down to nearly 0.08 mm. Depending on the solution proposed, biomaterials are incorporated in the powder bed, in the liquid binder or in a post-process infiltrating agent. Well-defined porosity can be achieved by careful selection of the binder printing parameters or by mixing the powder with salt that is eventually leached out in water. The 3DP process has the advantage of being conducted at room temperature, but there are concerns related to binder toxicity and the mechanical strength of the built parts.


The process chain of 3D-printing is as shown in Figure below.


Fig. 3-Dimensional Printing Schematic [10]


Organ printing[11]


Based on the concept of the 3D-printing technique, one development of rapid prototyping technology in TE is organ printing.


Vladimir et al. [11] demonstrated in their report that organ printing is a biomedically relevant RP technique that exploits tissue fluidity. Its computer-assisted deposition materials are cells, cell aggregates or matrix. The components used in organ printing are jet-based cell printers, cell dispensers or bioplotters, different types of 3D hydrogels, and varying cell types.


Fig- (a) CAD-based cell printer. (b) Bovine aortic endothelial cells printed in 50-micron drops in a line. (c) Cross-section of the p(NIPA-co-DMAEA) gel showing the thickness of sequentially placed layers. (d) The actual cell printer. (e) The cell printer connected to a PC via a bidirectional parallel port, together with its 9 jets. (f) Endothelial cell aggregates 'printed' on collagen before fusion. (g) After fusion. This information is taken from [11].


The sequential process of organ printing includes (i) preprocessing, (ii) processing and (iii) postprocessing [10]. Preprocessing is the development of a computer-aided design (CAD), or blueprint, of the specific organ structure. The required 3D design can be obtained from digitized image reconstruction of a natural organ or tissue. This imaging data is derived from various modalities, such as noninvasive scanning of the human body (e.g. MRI or CT) or detailed 3D reconstruction of serial sections of the specific organ. In processing, the CAD design of the organ structure is printed layer by layer by a jet-based cell printer. Postprocessing is the perfusion of the printed organ and its biomechanical conditioning to both direct and accelerate organ maturation.


Fig- Schematic representation of cell printing, assembly and perfusion of a 3D-printed vascular unit. Red: endothelial cell aggregates; blue: smooth muscle cell aggregates [11]


Basically, organ printing takes advantage of the fusion phenomenon of embryonic cells and tissues, which behave as viscoelastic fluids that can flow and fuse [11].


Vladimir et al. [11] argued, on the basis of several achievements: the development of a printer that can print cells and/or cell aggregates; the demonstration of a procedure for layer-by-layer sequential deposition and solidification of a thermo-reversible gel or matrix; and the fusion property of embryonic cells, cell aggregates or tissues, which fuse into ring-like or tube-like structures within the gel when placed closely together; that organ printing is a feasible advanced technique for TE.


In applying rapid prototyping techniques to TE, the vascular density of the desired organ is one of the most crucial factors for adequate organ perfusion, oxygen supply and functioning [10]. A tissue-engineered organ cannot survive and develop without adequate vascularisation.


The authors noted that a key advantage of the organ printing technique is the unique opportunity to eventually print a sophisticated branching vascular tree during the process of printing the specific organ.




Subsea Completions And Workover Subsea Trees Engineering Essay

This report centres on how Subsea Completions and Workover, Subsea Trees and Subsea Processing are applied to maximize oil production in the Gulf of Mexico.

Well completion involves the installation of a production conduit into which various components have been incorporated to allow efficient production, pressure integrity testing, etc. [2].

Workover is the recompletion of a well to restore production or change the well function [2], or the process of replacement and maintenance operations on the tools in an oil or gas well.

A subsea tree, also called a “wet tree”, is “an assembly of control valves, gauges and chokes that control oil and gas flow in a completed well” [2]. The tree also enables methanol and chemical injection, pressure and temperature monitoring, and allows vertical access for intervention [2].

Hydrocarbon processing is the removal of unwanted constituents and the recovery of wanted constituents under controlled conditions of pressure and temperature. Subsea processing is the processing of hydrocarbon fluids on the seabed. The processes involved include water re-injection, multiphase boosting, phase separation and gas compression. Not all processes are done offshore; some are still designed for onshore processing.

The Gulf of Mexico is an arm of the Atlantic Ocean, bounded on the northeast, north and northwest by the Gulf coast of the United States, on the southwest and south by Mexico, and on the southeast by Cuba [3]. In this region, completions and workovers, subsea trees and subsea processing have each been designed to perform particular tasks. The Gulf of Mexico is richly endowed with hydrocarbon deposits in deepwater. Below is a picture showing the Gulf of Mexico and the countries around the region.

The Gulf of Mexico [4]

Subsea Well Completion involves all the work done on the well prior to production and the installation of subsurface equipment, e.g. the tubing hanger, blowout preventer (BOP), etc., in order to produce successfully from the well. Completion consists of the lower and upper completion processes. The upper completion involves installation of all the various components from the base of the production tubing right to the top, while the lower completion takes place around the production area. Some categories of lower completions are:

2.1 Barefoot Completion: This type of completion is suitable for hard rock, multilaterals and underbalanced drilling. It is not suitable for weaker formations requiring sand control, or for wells that require selective isolation of oil, gas and water intervals [5].

Barefoot completion [8]

2.2 Cased Hole Completion: The portion of the wellbore that has had metal casing placed and cemented to protect the open hole from fluids, pressures, wellbore stability problems or a combination of these. This is also the process whereby a casing is run down through the production zone and cemented in place. This type of completion encourages good control of fluid flow [5].

Cased hole completion [7]

2.3 Open Hole Completion: This type of completion is more advantageous in horizontal wells because of the technical difficulties and high cost associated with cemented liners in horizontal wells [5].

The simplest types of oil well or gas well completion, open hole completions have several limitations and disadvantages. Consequently, they are typically limited to special completions in formations capable of withstanding production conditions [6].

Open hole completion [6]

Perforating Guns: This component is used to create a predefined pattern of perforations through the casing into the reservoir by means of explosive charges, to allow the flow of oil into the well [9]. An example is shown below.

Perforating gun [9]

Wellhead: This is the main component that houses the valves controlling fluid from the well to the manifold. It also acts as an interface between the production facility and the reservoir.

Wellhead [10]

Tubing Hanger: This component, located on top of the manifold, provides support for the production tubing. See the picture below.

Tubing Hanger [11]

Production Packer: “This is a standard component of the completion hardware of oil and gas wells and it is a seal between the tubing and the casing. It is used to isolate one part of the annulus from another for various reasons”. This is done to separate different sections, such as the gas lift section from the production section. It is also used in injection wells to isolate the zones [12].

Production packer [2]

Production tubing: This is the basic channel through which hydrocarbon flows from the reservoir to the surface. The diagram is seen below.

Production Tubing [13]

Downhole Safety Valve: This is used to protect the surface from the uncontrolled release of hydrocarbons. It is a cylindrical valve with either a ball or flapper closing mechanism; it is installed in the production tubing and is held in the open position by hydraulic pressure from surface [5]. See the diagram below.

Downhole Safety Valve [14]

Annular Safety Valve: This is needed to isolate the production tubing in order to prevent the inventory of natural gas downhole from becoming a hazard. See the diagram below.

Annular Safety Valve [15]

Landing Nipples: This is a receptacle to receive wireline tools. It is also a useful marker for depths in the well, which can otherwise be difficult to determine accurately, as can be seen in the diagram below [4].

Landing Nipples [16]

Downhole Gauges: This is an electronic or fibre-optic sensor providing continuous monitoring of downhole pressure and temperature. Gauges use a 1/4" control line clamped onto the outside of the tubing string to provide electrical or fibre-optic communication to the surface, as shown in the diagram below.

Downhole Gauge [17]

Wireline Entry Guide: This component is often installed at the end of the tubing (the shoe). It is intended to make pulling out wireline tools easier by offering a guiding surface for the tool string to re-enter the tubing without getting caught on the side of the shoe. The diagram is shown below [5].

Wireline entry guide [18]

Centralizer: In highly deviated wells, this component may be included towards the foot of the completion. It consists of a large collar, which keeps the completion string centralised within the hole [5].

Centralizer [19]

The Mensa field is an example of completions in the Gulf of Mexico. It consists of three wells, gathers gas into a manifold and transports it 68 miles to the West Delta 143 platform. See the diagrams below [20].

Subsea development [20] Subsea Production manifold [20]

Well Performance Sensitivities [2]

“Reduced production, scale, tubing and component leaks, artificial lift failures (e.g. ESP failure), water shut-off and re-perforation, and change of well function (e.g. producer to injector) are some events that call for a workover operation on a well” [2]. “A brief summary of the completed workovers in the Gulf of Mexico are:

A-10: Cleared debris and zone was re-perforated. Initial production 140bopd with 10/64 chokes. Well continues to produce at a rate of 140 bopd.

A-2: Cleared debris and oil flowed to the surface followed by emulsions. Currently, the well is being analyzed to determine the appropriate solution needed to liquefy the emulsions so that the well can flow without interruption.

A-16: Cleared debris and re-perforated. Well did not produce from existing zone. Currently under analysis to determine if other zones can be considered as candidates for perforation” [21].

Subsea trees can be classified into three types based on tree configuration, tree functionality and tree installation.

Schematic of the subsea tree [22]

Horizontal Trees

The features of a horizontal tree are listed below:

– “The valves are set off to the side.

– Well intervention can be done through them.

– No valves in the vertical bore

– Tree run before the Tubing Hanger

– Tubing Hanger orients from Tree (Passive)

– Internal Tree Cap installed

– Tubing Hanger seals are exposed to well fluids” [23]

Horizontal Tree [24]

Conventional Dual Bore (Vertical) Trees

Below are the features of a dual bore tree:

– “Master & Swab valves in vertical bore

– Tree run after Tubing Hanger

– Tubing Hanger orients from Wellhead or BOP pin (Active)

– External Tree Cap installed

– Tubing hanger seals isolated from well fluids” [23]

Conventional Dual Bore Tree [24]

A third type is the Mudline tree. These are usually used for shallow water applications and typically installed from jack-up rigs. They have minimal hydraulic functions [24].

Trees generally can either be used on production wells or on injection wells. Thus we have

Production Trees

Injection Trees

Trees can be installed either with Guidelines or Without Guidelines.

Examples of installed subsea trees in the Gulf of Mexico are:

This tree was used at the Shell-operated Silvertip field, part of the Perdido Development, to set the then-current subsea deepwater completion record of 9,356 ft [25].

Enhanced Deepwater Subsea Tree [26]

This was the world’s first 15,000psig subsea tree. The tree was adapted by Cameron from an existing mono-bore mudline tree, with modified components from its 10,000psig tree design [27].

Gyrfalcon Subsea Tree [27] During Installation [27]

This was to be supplied by FMC in the Blind Faith Development which is located in approximately 7,000ft of water [28].

15k Enhanced Horizontal Tree [28]

The Troika oil field, located 150 miles offshore Louisiana in the Green Canyon 244 unit and lying in a water depth of 2,700 ft, made use of a conventional, non-TFL, 10,000 psi dual-bore 4in×2in tree configuration, installed using guidelines [29].

“Deployment of subsea processing systems has seen a marked acceleration in the past couple of years, with various separation and boosting systems being ordered for deployment in the North Sea, the Gulf of Mexico, West Africa, South America and Australia” [30]. A driving factor for this is cost: the cost reduction is obvious when large and expensive topside facilities are eliminated in favour of subsea ones. Other drivers include “flow management and flow assurance, accelerated and/or increased recovery, and development of challenging subsea fields” [31]. Deployments in the Gulf of Mexico include:

Submerged Production System [32]

“The start-up of Aker’s MultiBooster pump technology at a water depth of 5,500ft below surface is expected to boost BP’s production at the King Field by an average of 20%. The MultiBooster system is a subsea multiphase pump module, combining field-proven twin screw technology with Aker’s suite of processing and subsea technology” [33].

Aker Kvaerner’s MultiBooster [33]

Also in the Perdido development, FMC's scope of work included the supply of a subsea caisson separation and boosting system [34]. Gas/liquid caisson separators with ESPs were used because of the field's low reservoir pressure and heavy oil [31].

Gas/Liquid Caisson separator at 2500m/8200ft water depth

for the Perdido Project [31]

Subsea technology and development in the Gulf of Mexico have improved steadily over the years, a result of new innovations that move the industry forward and optimize the abundant natural resources beneath the deep water. This also makes exploration and production activities in the region more fruitful for operators as well as marketers.




The Response Surface Methodology Engineering Essay

 


In the previous chapter, the working principle of the milling machine, the machining parameters that affect surface roughness, chip thickness formation and the factors influencing surface roughness in milling were discussed. This chapter gives a detailed overview of response surface methodology and its mathematical background.


Response Surface Methodology (RSM) is a collection of statistical and mathematical techniques useful for developing, improving and optimizing processes [23]. The most far-reaching applications of RSM are in situations where several input variables potentially influence some performance measure or quality characteristic of the process. This performance measure or quality characteristic is called the response. The input variables are sometimes called independent variables, and they are subject to the control of the scientist or engineer. The field of response surface methodology consists of the experimental strategy for exploring the space of the process or independent variables, empirical statistical modeling to develop an appropriate approximating relationship between the yield and the process variables, and optimization methods for finding the values of the process variables that produce desirable values of the response.


In this thesis, the focus is on statistical modeling to develop an appropriate approximating model between the response y and the independent variables.


In general, the relationship is


y = f(ξ1, ξ2, …, ξk) + e ……………………………… (3.1)


where the form of the true response function f is unknown and perhaps very complicated, and e is a term that represents other sources of variability not accounted for in f. Usually e includes effects such as measurement error on the response, background noise, the effect of other variables, and so on. Usually e is treated as a statistical error, often assuming it to have a normal distribution with mean zero and variance σ². Then


η = E(y) = f(ξ1, ξ2, …, ξk) ……………………………… (3.2)


The variables in Equation (3.2) are usually called the natural variables, because they are expressed in the natural units of measurement, such as degrees Celsius, pounds per square inch, etc. In much RSM work it is convenient to transform the natural variables to coded variables, which are usually defined to be dimensionless with mean zero and the same standard deviation. In terms of the coded variables, the response function (3.2) will be written as


η = f(x1, x2, …, xk) ……………………………… (3.3)


Because the form of the true response function f is unknown, we must approximate it. In fact, successful use of RSM is critically dependent upon the experimenter's ability to develop a suitable approximation for f. Usually, a low-order polynomial in some relatively small region of the independent variable space is appropriate. In many cases, either a first-order or a second-order model is used. The first-order model is likely to be appropriate when the experimenter is interested in approximating the true response surface over a relatively small region of the independent variable space, in a location where there is little curvature in f. For the case of two independent variables, the first-order model in terms of the coded variables is


ŷ = β0 + β1x1 + β2x2 ……………………………… (3.4)


The form of the first-order model in Equation (3.4) is sometimes called a main effects model, because it includes only the main effects of the two variables x1 and x2. If there is an interaction between these variables, it can be added to the model easily as follows:


ŷ = β0 + β1x1 + β2x2 + β12x1x2 ……………………………… (3.5)


This is the first-order model with interaction. Adding the interaction term introduces curvature into the response function. Often the curvature in the true response surface is strong enough that the first-order model (even with the interaction term included) is inadequate, and a second-order model is required in these situations. For the case of two variables, the second-order model is


ŷ = β0 + β1x1 + β2x2 + β11x1² + β22x2² + β12x1x2 ……………………………… (3.6)


This model is likely to be useful as an approximation to the true response surface in a relatively small region. The second-order model is widely used in response surface methodology for several reasons:


The second-order model is very flexible. It can take on a wide variety of functional forms, so it will often work well as an approximation to the true response surface.


It is easy to estimate the parameters (the β's) in the second-order model; the method of least squares can be used for this purpose.


There is considerable practical experience indicating that second-order models work well in solving real response surface problems.


In general, the first-order model is


ŷ = β0 + β1x1 + β2x2 + … + βkxk ……………………………… (3.7)


and the second-order model is


ŷ = β0 + Σi βixi + Σi βiixi² + ΣΣi&lt;j βijxixj ……………………………… (3.8)


In some situations, approximating polynomials of order greater than two are used. The general motivation for a polynomial approximation to the true response function f is the Taylor series expansion around the point x10, x20, …, xk0.


Finally, let’s note that there is a close connection between RSM and linear regression analysis. For example, consider the model


y = β0 + β1x1 + β2x2 + … + βkxk + e ……………………………… (3.9)


The β's are a set of unknown parameters. To estimate their values, we must collect data on the system we are studying. Because, in general, polynomial models are linear functions of the unknown β's, we refer to the technique as linear regression analysis.
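As a concrete illustration of the least-squares estimation referred to above, the following sketch fits the two-variable second-order model (3.6) to a small synthetic data set; the data, the noise level and the use of NumPy are assumptions for illustration only, not part of this work.

```python
# Minimal sketch: estimating the beta's of the second-order model (3.6)
# by ordinary least squares on invented data.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)                # coded variables
x2 = rng.uniform(-1, 1, 30)
y = 5 + 2*x1 - 3*x2 + 1.5*x1**2 + 0.5*x2**2 + 0.8*x1*x2 \
    + rng.normal(0, 0.1, 30)               # "true" surface plus error e

# Design matrix with columns 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("b0, b1, b2, b11, b22, b12 =", np.round(beta, 3))
```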


RSM is an important branch of experimental design and a critical tool in developing new processes and optimizing their performance. The objectives of quality improvement, including reduction of variability and improved process and product performance, can often be accomplished directly using RSM. It is well known that variation in key performance characteristics can result in poor process and product quality. During the 1980s [2, 3], considerable attention was given to process quality, and methodology was developed for using experimental design, specifically for the following:


For designing or developing products and processes so that they are robust to component variation.

For minimizing variability in the output response of a product or a process around a target value.

For designing products and processes so that they are robust to environmental conditions.


Robust means that the product or process performs consistently on target and is relatively insensitive to factors that are difficult to control. Professor Genichi Taguchi [24, 25] used the term robust parameter design (RPD) to describe his approach to this important problem. Essentially, robust parameter design methodology seeks to reduce process or product variation by choosing levels of the controllable factors (or parameters) that make the system insensitive (or robust) to changes in a set of uncontrollable factors that represent most of the sources of variability. Taguchi referred to these uncontrollable factors as noise factors. RSM assumes that these noise factors are uncontrollable in the field, but can be controlled during process development for the purposes of a designed experiment.


Considerable attention has been focused on the methodology advocated by Taguchi, and a number of flaws in his approach have been identified. However, the framework of response surface methodology easily incorporates many useful concepts from his philosophy [23]. There are also two other full-length books on the subject of RSM [26, 27]. In this work we concentrate mostly on building and optimizing empirical models and largely do not consider the problems of experimental design.


Most applications of RSM are sequential in nature. At first, some ideas are generated about which factors or variables are likely to be important in the response surface study; this is usually called a screening experiment. The objective of factor screening is to reduce the list of candidate variables to a relative few, so that subsequent experiments will be more efficient and require fewer runs or tests. The purpose of this phase is the identification of the important independent variables.


The experimenter's objective is then to determine whether the current settings of the independent variables result in a value of the response that is near the optimum. If the current settings or levels of the independent variables are not consistent with optimum performance, then the experimenter must determine a set of adjustments to the process variables that will move the process toward the optimum. This phase of RSM makes significant use of the first-order model and an optimization technique called the method of steepest ascent (or descent).


Phase 2 begins when the process is near the optimum. At this point the experimenter usually wants a model that will accurately approximate the true response function within a relatively small region around the optimum. Because the true response surface usually exhibits curvature near the optimum, a second-order model (or perhaps some higher-order polynomial) should be used. Once an appropriate approximating model has been obtained, it may be analyzed to determine the optimum conditions for the process. This sequential experimental process is usually performed within some region of the independent variable space called the operability region, experimentation region or region of interest.


Multiple linear regression (MLR) is a method used to model the linear relationship between a dependent variable and one or more independent variables. The dependent variable is sometimes called the predicted variable, and the independent variables the predictors. MLR is based on least squares: the model is fitted such that the sum of squares of the differences between observed and predicted values is minimized. The relationship between a set of independent variables and the response y is determined by a mathematical model called a regression model. When there are more than two independent variables, the regression model is called a multiple-regression model. In general, a multiple-regression model with q independent variables takes the form


yi = β0 + β1xi1 + β2xi2 + … + βqxiq + ei    (i = 1, 2, …, n)


yi = β0 + Σj βjxij + ei    (j = 1, 2, …, q)


where n > q. The parameter βj measures the expected change in the response y per unit increase in xj when the other independent variables are held constant. The ith observation at the jth level of the independent variable is denoted xij. The data layout for the multiple-regression model is shown in Table 3.1.


Table 3.1: Data for Multiple-Regression Model


y     x1    x2    …    xq
y1    x11   x12   …    x1q
y2    x21   x22   …    x2q
…     …     …     …    …
yn    xn1   xn2   …    xnq


Box-Behnken designs are rotatable (or nearly rotatable) designs that also fit a full quadratic model but use just three levels of each factor. Design-Expert offers Box-Behnken designs for three to seven factors. These designs require only three levels, coded as -1, 0 and +1. Box and Behnken created the design by combining two-level factorial designs with incomplete block designs. This procedure creates designs with desirable statistical properties but, most importantly, with only a fraction of the experiments needed for a full three-level factorial. These designs offer limited blocking options, except for the three-factor version.


Box-Behnken designs require a lower number of actual experiments to be performed, which facilitates probing possible interactions between the parameters studied. The Box-Behnken design is a spherical, rotatable design: it consists of a central point and the middle points of the edges of a cube circumscribed on a sphere, i.e. three interlocking two-level factorial designs plus a central point. In the present work, the three-factor Box-Behnken experimental design is applied to study the process parameters.
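The following is a minimal sketch of how the three-factor Box-Behnken design matrix can be constructed: two-level factorials on each pair of factors with the remaining factor at its centre level, plus centre runs. It is not tied to Design-Expert or DOE++, and the choice of three centre points is an illustrative assumption.

```python
# Sketch: build a Box-Behnken design matrix in coded units (-1, 0, +1).
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    runs = []
    for i, j in combinations(range(n_factors), 2):    # each pair of factors
        for a, b in product((-1, 1), repeat=2):       # 2^2 factorial on that pair
            row = [0] * n_factors                     # remaining factor(s) at centre
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([0] * n_factors for _ in range(n_center))   # centre points
    return runs

design = box_behnken(3)
for row in design:
    print(row)
print(len(design), "runs")    # 12 edge-midpoint runs + 3 centre runs = 15
```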


3.1.5 Analysis of Variance (ANOVA)


The purpose of the statistical analysis of variance (ANOVA) is to determine which design parameters significantly affect the surface roughness. Based on the ANOVA, the relative magnitude of the machining parameters with respect to surface roughness is investigated to determine more accurately the optimal combination of the machining parameters.


Analysis of variance (ANOVA) uses the same conceptual framework as linear regression. The main difference comes from the nature of the explanatory variables: instead of being quantitative, here they are qualitative. In ANOVA, explanatory variables are often called factors. If p is the number of factors, the ANOVA model is written as follows:


yi = β0 + Σj βk(i,j),j + ei ……………………………… (3.1)


where yi is the value observed for the dependent variable for observation i, k(i,j) is the index of the category of factor j for observation i, and ei is the error of the model. The hypotheses used in ANOVA are identical to those used in linear regression: the errors ei follow the same normal distribution N(0, σ) and are independent.


The way the model is written, with this hypothesis added, means that, within the framework of the linear regression model, the yi are realizations of random variables with mean µi and variance σ², where


µi = β0 + Σj βk(i,j),j ……………………………… (3.2)


To use the various tests proposed in the linear regression results, it is recommended to check retrospectively that the underlying hypotheses have been correctly verified. The normality of the residuals can be checked by analyzing certain charts or by using a normality test. The independence of the residuals can be checked by analyzing certain charts or by using the Durbin-Watson test.
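For illustration, the sketch below runs a one-factor ANOVA F-test with SciPy on invented surface-roughness readings at three levels of a single machining factor; it shows the idea of the test only and is not the DOE++ workflow used in this work.

```python
# Minimal one-factor ANOVA F-test on invented roughness data.
from scipy import stats

level_1 = [1.82, 1.75, 1.90, 1.85]   # e.g. roughness Ra (micron) at level -1
level_2 = [1.60, 1.55, 1.66, 1.58]   # level 0
level_3 = [1.41, 1.38, 1.47, 1.44]   # level +1

f_stat, p_value = stats.f_oneway(level_1, level_2, level_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates the factor has a statistically significant effect.
```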


Interactions: By interaction we mean an artificial factor (not measured) which reflects the interaction between at least two measured factors. To draw a parallel with linear regression, the interactions are equivalent to the products between the continuous explanatory variables, although here obtaining an interaction requires nothing more than simple multiplication between two variables. The notation used to represent the interaction between factor A and factor B is A*B. The interactions to be used in the model can be easily defined in the DOE++ software.


Nested effects: When constraints prevent us from crossing every level of one factor with every level of the other factor, nested factors can be used. We say we have a nested effect when fewer than all levels of one factor occur within each level of the other factor. An example of this might be if we want to study the effects of different machines and different operators on some output characteristic, but the operators cannot change the machines they run. In this case, each operator is not crossed with each machine but rather only runs one machine. The DOE++ software has an automatic device to find nested factors, and one nested factor can be included in the model.


Balanced and unbalanced ANOVA: We speak of a balanced ANOVA when, for each factor (and interaction if available), the number of observations within each category is the same. When this is not true, the ANOVA is said to be unbalanced. The DOE++ software can handle both cases.


Random effects: Random factors can be included in an ANOVA. When some factors are assumed to be random, the DOE++ software displays the expected mean squares table.


Constraints: During the calculations, each factor is broken down into a sub-matrix containing as many columns as there are categories in the factor. Typically, this is a full disjunctive table. Nevertheless, this breakdown poses a problem: if there are g categories, the rank of this sub-matrix is not g but g-1. This leads to the need to delete one of the columns of the sub-matrix and possibly to transform the other columns. Several strategies are available depending on the interpretation we want to make afterwards:


a1 = 0: the parameter for the first category is null. This choice allows us to treat the effect of the first category as a reference. In this case, the constant of the model is equal to the mean of the dependent variable for group 1.


ag = 0: the parameter for the last category is null. This choice allows us to treat the effect of the last category as a reference. In this case, the constant of the model is equal to the mean of the dependent variable for group g.


Sum(ai) = 0: the sum of the parameters is null. This choice forces the constant of the model to be equal to the mean of the dependent variable when the ANOVA is balanced.


Sum(ni·ai) = 0: the weighted sum of the parameters is null. This choice forces the constant of the model to be equal to the mean of the dependent variable even when the ANOVA is unbalanced.


Note: even though the choice of constraint influences the values of the parameters, it has no effect on the predicted values or on the various goodness-of-fit statistics.


Multiple Comparisons Tests: One of the main applications of ANOVA is multiple comparison testing, whose aim is to check whether the parameters for the various categories of a factor differ significantly or not. For example, in the case where four treatments are applied to plants, we want to know not only whether the treatments have a significant effect, but also whether the treatments have different effects. Numerous tests have been proposed for comparing the means of categories. Most of these tests assume that the sample is normally distributed. The DOE++ software provides the main such tests.


Summary of the variable selection: Where a selection method has been chosen, the DOE++ software displays the selection summary. For a stepwise selection, the information corresponding to the successive steps is displayed. Where the best model for a number of variables varying from p to q has been selected, the best model for each number of variables is displayed with the corresponding statistics, and the best model for the chosen criterion is displayed in bold.


Observations: The number of observations used in the calculations. In the formulas shown below, n is the number of observations.


Sum of weights: The sum of the weights of the observations used in the calculations. In the formulas shown below, W is the sum of the weights.


DF: The number of degrees of freedom for the chosen model (corresponding to the error part).


R²: The determination coefficient for the model. This coefficient, whose value is between 0 and 1, is only displayed if the constant of the model has not been fixed by the user. Its value is defined by:


R² = 1 − Σi wi(yi − ŷi)² / Σi wi(yi − ȳ)² , where ȳ = (Σi wi yi) / W


The R² is interpreted as the proportion of the variability of the dependent variable explained by the model. The nearer R² is to 1, the better the model. The drawback of R² is that it does not take into account the number of variables used to fit the model.


Adjusted R²: The adjusted determination coefficient for the model. The adjusted R² can be negative if the R² is near zero. This coefficient is only calculated if the constant of the model has not been fixed by the user. Its value is defined by adjusted R² = 1 − (1 − R²)(W − 1)/(W − p − 1), where p is the number of variables in the model.


The adjusted R² is a correction to the R² which takes into account the number of variables used in the model. The analysis of variance table is used to evaluate the explanatory power of the explanatory variables. Where the constant of the model is not set to a given value, the explanatory power is evaluated by comparing the fit (in the least-squares sense) of the final model with the fit of the rudimentary model consisting only of a constant equal to the mean of the dependent variable. Where the constant of the model is set, the comparison is made with respect to the model for which the dependent variable is equal to the constant which has been set.
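As a small worked illustration of the definitions above, the following sketch computes R² and adjusted R² from observed and fitted values in the unweighted case (all wi = 1); the data and the choice of p = 2 variables are invented for illustration.

```python
# Compute R-squared and adjusted R-squared for a fitted model (unweighted case).
import numpy as np

y     = np.array([1.8, 1.6, 1.4, 1.3, 1.1])       # observed response
y_hat = np.array([1.75, 1.62, 1.45, 1.28, 1.12])  # model predictions
p = 2                                              # number of model variables

ss_res = np.sum((y - y_hat) ** 2)      # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares about the mean
n = len(y)

r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"R^2 = {r2:.4f}, adjusted R^2 = {r2_adj:.4f}")
```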


The predictions and residuals table shows, for each observation, its weight, the value of the qualitative explanatory variable (if there is only one), the observed value of the dependent variable, the model's prediction, the residuals, the confidence intervals together with the fitted prediction, and Cook's D if the corresponding options have been activated in the dialog box. Two types of confidence interval are displayed: a confidence interval around the mean (corresponding to the case where the prediction would be made for an infinite number of observations with a given set of values for the explanatory variables) and an interval around an isolated prediction (corresponding to the case of an isolated prediction for the given values of the explanatory variables). The second interval is always wider than the first, since the random variability is larger.


In this chapter, a detailed overview of response surface methodology and its mathematical background has been presented. The various RSM-related methods, such as the Box-Behnken design, multiple regression and the ANOVA model, have been described.




Seismic Design Of Industrial Rack Clad Buildings Engineering Essay

This paper describes the development of the overstrength factor and ductility for a high-level storage system called the rack clad building (RCB) system. Unlike the steel storage structures that are common in superstores, these structures are built outdoors and the outermost frame is used to support cladding. As these structures interact frequently with people, they pose a significant threat to public safety during any windstorm or earthquake event. Several research works have been done on steel storage rack structures but not on RCB systems, and currently no seismic design guideline exists for RCB structures. The overstrength factor is an important parameter required for calculating the design seismic force for a type of structure. RCB structures generally use teardrop connectors at the beam-column joints. These connections have semi-rigid behavior and show very different hysteresis behavior compared to a conventional joint. To simulate this behavior in the finite element model, nonlinearity has been introduced using moment-rotation data from a previously performed laboratory experiment. Using these experimental data, a set of three-dimensional models has been generated, and several nonlinear static analyses have been performed to determine the overstrength factor and ductility for varying heights and bay lengths.

Steel storage racks in supermarkets, hardware stores and handyman stores have become very common in Canada. These places are visited by people every day. Due to the high proximity of these structures to people, they pose a significant threat to public safety. During an earthquake, these structures can collapse and injure people if they are not properly designed to withstand the inertia forces. Until now, very little effort has been put into the seismic design of these structures.

As these structures are an integral part of everyday public activity, the importance of a proper design guideline for them is very high. Because rack structures are generally located inside larger buildings, wind forces were generally ignored, and there was also reluctance to consider seismic loading. The National Building Code of Canada (NBCC, 2005) recognizes the seismic risk of rack storage systems and recommends that seismic provisions be applied when designing these types of structures. FEMA 460 (2005) provides seismic guidelines for designing these storage structures. However, the RCB is a new type of steel storage structure which is generally installed outside of a building; the sides and roof of the rack structure serve as the walls and roof of the building. These types of structures are called rack clad building systems. Using the rack structure's peripheral frame as a wall eliminates the need for a larger enclosing structure to protect the racks, which significantly reduces cost. This type of structure is becoming popular because of its low cost and rapid rate of construction. A rack clad building has to withstand the full force of an earthquake or windstorm. For these structures, wind forces cannot be ignored, and they have to be properly designed against lateral forces, as they pose a higher risk to public safety than conventional steel storage racks. There are some guidelines in practice for designing steel storage rack systems, but there is no similar standard in place for designing RCB systems against seismic and wind loading. This research is therefore important, as it will be of great help to structural designers and the construction industry of Canada.

As the number of superstores and warehouses increases and public access to them becomes more frequent, safety is becoming a major concern. The safety and security of the citizens of a country are very important, and this is also the primary objective of this analysis. The objective of the proposed RCB system analysis is to develop a standard design guideline for structural design practitioners, contractors and the construction industry. To develop the mathematical model, several finite element models have been created. From the finite element models, the overstrength, force reduction factor, natural period and ductility have been calculated, which are very important parameters in seismic design. These parameters will be used for calculating the seismic base shear for future RCB frame designs and will also help in member size proportioning. The expected design performance level for this structure is collapse prevention against the maximum considered earthquake.

As the RCB frame is built from elements containing holes at regular intervals, the frame elements lose stiffness. The stiffness of the frame elements has therefore been reduced in the model to take account of the pre-existing holes in the frames. A simple model of a frame element was generated in FEM software using shell elements to calculate the stiffness with and without holes, and thus the relative stiffness was obtained. Using the relative stiffness, several RCB frame models were produced using line elements and analyzed using computer simulation. The analysis was carried out using the nonlinear static procedure. The results produced from the FEM model were checked against existing test results from the published literature. The beam-column joint behavior was simulated in the FEM model and checked against previous experimental values from the literature. A hysteretic load-deflection curve was produced for result verification and further studies.

This research was carried out to produce a design guideline that will enable design practitioners to design RCB frames on a solid basis. A standard design methodology for RCB systems will enable designers to achieve the life safety performance level against the design basis earthquake with minimum time and cost implications. With the desired performance level, these structures will be safer for public interaction during any severe wind or seismic event. Also, by achieving the desired level of performance, we will be able to reduce the risk of overdesign as well as cost.

The first step of the guideline is to calculate the overstrength and ductility of RCB systems. The second step is the calculation of the natural period and force reduction factor. The following figure shows these factors and their relationships; a small numerical sketch follows the figure.

: Over strength, force reduction factor and ductility
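For illustration only, the sketch below shows how the overstrength factor and displacement ductility might be read off a pushover curve, taking overstrength as the ratio of ultimate to design base shear and ductility as the ratio of ultimate to yield roof displacement; all numerical values are assumed placeholders, not results of this study.

```python
# Illustrative sketch (values invented): overstrength and ductility from a
# pushover curve.
v_design   = 40.0    # design base shear (kN) - assumed for illustration
v_ultimate = 100.0   # peak base shear from the pushover curve (kN)
d_yield    = 25.0    # roof displacement at idealized yield (mm)
d_ultimate = 72.0    # roof displacement at ultimate capacity (mm)

overstrength = v_ultimate / v_design   # ultimate over design base shear
ductility    = d_ultimate / d_yield    # ultimate over yield displacement
print(f"overstrength = {overstrength:.1f}, ductility = {ductility:.1f}")
```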

Generally, racking systems consist of cold-rolled steel sections. The frame system consists of upright posts with holes at regular intervals for connecting beams on one side and braces on the other side. The racks rely on portal frame action in the down-aisle direction and frame action in the cross-aisle direction to resist lateral loads. The story height can vary depending on the stock to be stored [7]. The RCB structure under consideration has a story height of 1600 mm. A typical arrangement of a racking system is given in the following figure.

Beam

Diagonal

Pallet support bar

Guard Corner

Frame

Drum Chock

Plywood chipboard

Galvanized steel shelf panel

Base plate

: Basic components

The frame system used in the down-aisle direction of steel storage racks, which uses teardrop beam-to-upright connections, although appearing similar to the steel moment-resisting frames defined in the 2003 NEHRP Recommended Provisions (FEMA 2004), behaves very differently from the connection systems commonly used in buildings. Generally, moment-resisting connections in buildings are designed to develop inelastic deformations in the beams away from the beam-column joint, but in RCB structures this inelastic behavior occurs directly in the beam-to-upright connections. [6]

In the rack industry, the columns are called uprights. Although the system exhibits highly nonlinear behavior up to very large relative rotations between the beams and uprights, it remains almost elastic in the sense that the behavior does not cause permanent deformation at the beam-to-upright joints. The inelastic rotation capacity of beam-to-upright connections is significantly high; for the connection under consideration it has exceeded 0.06 radians, and some researchers [6] found that it can be as high as 0.2 radians. In general, building moment-resisting connections have an inelastic rotation capacity in the range of 0.04 radians for special moment-frame systems. However, the rotational demands on rack moment-resisting connections are much greater than those on buildings because of the relatively short height of rack structures for comparable fundamental periods. Therefore, the high rotational capacity of beam-to-upright moment-resisting connections is necessary in order for the structure to withstand strong earthquake ground motions. [6]

The performance expectations and design intentions of the 2003 NEHRP Recommended Provisions state: “The design earthquake ground motions specified herein could result in both structural and nonstructural damage. For most structures designed and constructed according to these Provisions, structural damage from design earthquake ground motion will be repairable although perhaps not economically so. The actual ability to accomplish these goals depends upon a number of factors including the structural framing type, configuration, materials, and as-built details of construction. For ground motions larger than the design levels, the intent of these Provisions is that there is a low likelihood of structural collapse.” [6]

The performance expectations for the structural design of steel storage racks can be stated as follows: the rack structures should have a low probability of collapse when subjected to Maximum Considered Earthquake (MCE) ground motions. Storage racks are currently designed using equivalent lateral force procedures that use reduced Design Basis Earthquake (DBE) ground motions. Taking the MCE ground motions, at which collapse prevention is checked, to be 1.5 times larger than the DBE ground motions is not based entirely on solid mathematical grounds, but rather on past experience. As the inelastic behavior of rack structural members and connections is significantly different from that of building structural systems, it would be desirable that, in addition to the equivalent DBE lateral force design, a check of collapse prevention at the MCE be explicitly made [6].

The following figure shows a side view of an RCB structure, illustrating the use of braces in the down-aisle direction.

: Side view of a RCB with braces

In the subsequent figures some important components of RCB are shown

: Spacer beams connecting two racks

: Typical upright post detail

: Typical upright post to beam connection [4]

The posts are made of 1.8 mm, 2 mm, 2.6 mm and 3 mm thick steel. The shape of the section is shown in the figure above. Beams are generally rectangular box sections with thickness varying from 1.5 mm to 1.8 mm. The beam depth ranges from 72 mm to 150 mm. The width is generally 50 mm.

Braces are made of 'C' sections, typically of two sizes: 45mm×30mm×2mm and 60mm×30mm×4mm. These braces are generally connected to the uprights with a single nut and bolt.

For computer modeling of the actual beam-column joints, moment-rotation data from [1] were used. The moment-rotation behavior of the beam-column connection is shown below.

: Double cantilever test setup

The experimental moment rotation plots for different combinations are shown below. From these moment rotation graphs the one suitable for the project under consideration was selected,

: Moment rotation plots for varying column thickness and beam depths for a 4 lipped connector

which is the curve corresponding to 2.5UT-4L-100BD. An idealized curve was plotted with secant stiffness and strain hardening slope. The idealized curve is shown below.

Figure: Experimental and idealized moment-rotation curves
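A minimal sketch of how such a bilinear idealization can be extracted from a measured moment-rotation record is given below. The choice of the secant point (here 60% of the peak moment) and the use of the peak as the end of the hardening branch are assumptions made for illustration; the study's exact idealization rule is not reproduced here, and the sample curve is made up rather than the 2.5UT-4L-100BD data.

```python
import numpy as np

def bilinear_idealization(theta, moment, secant_fraction=0.6):
    """Idealize a measured moment-rotation curve as a bilinear curve: an
    initial branch with the secant stiffness taken at a chosen fraction of
    the peak moment, followed by a strain-hardening branch running up to
    the peak point.  The 60% secant point is an assumption."""
    theta = np.asarray(theta, dtype=float)
    moment = np.asarray(moment, dtype=float)

    m_peak = float(moment.max())
    i_peak = int(moment.argmax())

    # Secant point: first rotation at which the moment reaches the chosen fraction of the peak.
    m_y = secant_fraction * m_peak
    theta_y = float(np.interp(m_y, moment[: i_peak + 1], theta[: i_peak + 1]))
    k_secant = m_y / theta_y

    # Strain-hardening slope from the secant point to the peak point.
    k_hardening = (m_peak - m_y) / (float(theta[i_peak]) - theta_y)
    return (theta_y, m_y), (float(theta[i_peak]), m_peak), k_secant, k_hardening

# Illustrative curve only (not the experimental data):
theta = np.linspace(0.0, 0.08, 200)
moment = 3.5e6 * theta / (1.0 + 40.0 * theta)     # N*mm, made-up saturating shape
yield_pt, peak_pt, k0, kh = bilinear_idealization(theta, moment)
print(yield_pt, peak_pt, k0, kh)
```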

Several analytical models were generated in finite element modeling software to calculate section properties and to simulate the nonlinear moment-rotation behavior of the beam-to-upright joint.

This calculation was carried out to eliminate the need to model the post section with its holes in the full structural model. Modeling the holes requires shell-element modeling of the column section, which is time consuming and impractical from an analysis point of view. The approximate section properties of the post were calculated partly with a computer model and partly by hand calculation, and a relationship was developed between the sections with and without holes. The calculated properties are the moment of inertia, shear area, average cross-sectional area and torsional constant. Below is an FE model of a post with holes.

Figure: Cross section and finite element model of an upright with holes

The calculated section properties, expressed as ratios to the properties of the section without holes, are shown in the table below.

Property | Ratio (with holes / without holes)
Moment of inertia about the 2-axis | 90.35%
Moment of inertia about the 3-axis | 86.11%
Average cross-sectional area | 95.60%
Torsional constant | 98.16%
Shear area in the 2-direction | 84.39%
Shear area in the 3-direction | 89.22%

Table: Relative stiffness with respect to the section without holes
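These ratios can then be applied as knock-down factors to the gross (no-hole) section properties when the upright is represented by an ordinary prismatic frame element. A minimal sketch of this bookkeeping is shown below; the gross property values in the example are placeholders, not the values from the study.

```python
# Knock-down factors from the table above (section with holes / section without holes).
REDUCTION = {
    "I22":  0.9035,   # moment of inertia about the 2-axis
    "I33":  0.8611,   # moment of inertia about the 3-axis
    "area": 0.9560,   # average cross-sectional area
    "J":    0.9816,   # torsional constant
    "As2":  0.8439,   # shear area in the 2-direction
    "As3":  0.8922,   # shear area in the 3-direction
}

def equivalent_properties(gross: dict) -> dict:
    """Scale gross (no-hole) section properties so that a plain prismatic
    frame element approximates the perforated upright without shell modeling."""
    return {name: value * REDUCTION[name] for name, value in gross.items()}

# Placeholder gross properties (mm^4 for I and J, mm^2 for the areas); the real
# values depend on the upright geometry and are not reproduced here.
gross = {"I22": 2.0e5, "I33": 4.0e5, "area": 500.0, "J": 1.0e3, "As2": 250.0, "As3": 300.0}
print(equivalent_properties(gross))
```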

In order to take the nonlinear moment-rotation behavior into account, a nonlinear hinge was modeled in the FE software. The hinge model was tested on a beam-column joint: the hinge was inserted at the end of the beam, and a static load was applied monotonically until the hinge reached its ultimate capacity. The output from the finite element model is shown below.

Figure: Beam-column joint model

Figure: Simulated moment-rotation behaviour in the model

The frames were created in FEM software fully capable of dealing with material and geometric nonlinearity. The beam-column joint rotation behavior is simulated using nonlinear plastic hinges assigned at the beam-column joints. Steel plastic hinge behavior is used over the critical length of the columns so that plastic hinges form after the yield moment is reached. Axial nonlinearity (an axial P hinge) is used for the braces so that they carry considerably lower load in compression; this nonlinear object can automatically calculate the buckling load and render the braces ineffective once the buckling load has been reached. The pushover analysis used here is nonlinear static in nature: the load is applied in a specified direction using an acceleration in that direction, and the resulting roof-top displacement and base shear are monitored until the structure reaches its ultimate capacity. The following curves are generated from the monitored data. Some pushover curves were generated with self weight only; others include self weight plus content weight. The content weight is 2000 kg per tray, which equates to 4.35 kN/m on the beams.
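As a hedged sketch of how the overstrength and ductility values reported below can be extracted from a monitored pushover curve, the snippet below bilinearizes the base shear versus roof displacement record and forms the two ratios. The yield-point definition, the ultimate point at peak base shear, and the design base shear used in the example are all assumptions; the exact post-processing rules of the study are not stated.

```python
import numpy as np

def overstrength_and_ductility(disp, base_shear, v_design):
    """Extract the overstrength factor (V_max / V_design) and the displacement
    ductility (d_ultimate / d_yield) from a monitored pushover curve.  The yield
    displacement comes from an equal-initial-stiffness idealization and the
    ultimate point is taken at peak base shear; both are illustrative choices."""
    disp = np.asarray(disp, dtype=float)
    base_shear = np.asarray(base_shear, dtype=float)

    v_max = float(base_shear.max())
    d_ult = float(disp[int(base_shear.argmax())])

    # Initial stiffness estimated from the early, essentially linear part of the curve.
    k0 = base_shear[1] / disp[1]
    d_yield = v_max / k0              # elastic-perfectly-plastic yield displacement

    return v_max / v_design, d_ult / d_yield

# Illustrative curve only (not an actual analysis output):
d = np.linspace(0.0, 0.30, 61)            # roof displacement, m
v = 120.0 * d / (1.0 + 8.0 * d)           # base shear, kN, saturating shape
print(overstrength_and_ductility(d, v, v_design=10.0))
```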

Fig 13: Two-dimensional analysis model for the down-aisle direction

Fig 14: Two-dimensional analysis model for the cross-aisle direction

Fig 15: Three-dimensional analysis model of a single rack in the down-aisle direction (without braces)

Fig 16: Three-dimensional braced model with 2x4 bays

Single-frame pushover analyses were carried out in the down-aisle and cross-aisle directions. For these analyses the content weight was first assumed to be zero; additional down-aisle analyses were then run with partial and full content weight.

Fig 17: Pushover curve for the down-aisle direction (self weight only)

The calculated overstrength factor for the above-mentioned frame is 2.5 and the ductility is 2.9.

Fig 18: Pushover curve for the down-aisle direction (1/3rd content weight)

The calculated overstrength factor for the above-mentioned frame is 2.0 and the ductility is 2.6.

Fig 19: Pushover curve for the down-aisle direction (2/3rd content weight)

The calculated overstrength factor for the above-mentioned frame is 4.4 and the ductility is 2.8.

Fig 20: Pushover curve for the down-aisle direction (full content weight)

The overstrength factor could not be calculated because the beam-to-upright connections became completely plasticized under the content load alone, but the ductility factor was calculated to be 2.5.

Fig 21: Pushover curve for the cross-aisle direction (self weight only)

The overstrength factor and ductility for the above-mentioned frame are 1.48 and 1, respectively.

Fig 22: Pushover curve of a single rack in the down-aisle direction (self weight only)

The overstrength factor and ductility for the above-mentioned frame are 1.3 and 1.9, respectively.

Fig 24: Pushover curve of a 2x4 unbraced 3D model in the down-aisle direction (self weight)

The overstrength factor and ductility in the cross-aisle direction for the above-mentioned frame are 2.5 and 1.45, respectively.

Fig 25: Pushover curve of a 2x4 fully braced 3D model in the down-aisle direction (self weight)

The overstrength factor and ductility for the above-mentioned frame are 1.9 and 1.0, respectively.

Fig 24: Pushover curve of a 2x4 fully braced 3D model in the cross-aisle direction (self weight)

The overstrength factor and ductility for the above-mentioned frame are 1.3 and 1.33, respectively.

Direction | Model | Load case | Overstrength | Ductility
Down-aisle | 2D unbraced | Self weight (SW) | 2.5 | 2.9
Down-aisle | 2D unbraced | SW + 1/3rd content | 2.0 | 2.6
Down-aisle | 2D unbraced | SW + 2/3rd content | 4.4 | 2.8
Down-aisle | 2D unbraced | SW + full content | Indeterminate | 2.5
Down-aisle | 2D unbraced | SW | 1.3 | 1.9
Cross-aisle | 2D unbraced | SW | 1.48 | 1

Table 3: Overstrength and ductility for various configurations

From the above study of RCB frames it is found that, in the down-aisle direction, the overstrength factor is a function of the content weight and varies from 1.3 to 4.4, while the ductility ranges from 1 to 2.9. For the full content weight, all the beam-column joints became plasticized under gravity load alone, and hence the overstrength factor could not be calculated. It is therefore strongly recommended that the racks not be loaded to their full capacity in any situation.

In the cross-aisle direction the frame behaviour is entirely governed by the performance of the braces; buckling failure of the braces is the critical event during a pushover analysis. It was observed that the ductility is only one in this direction, which implies that the frame is almost elastic in the cross-aisle direction up to failure. The calculated overstrength in the cross-aisle direction is 1.48.

The most important parameter is the overstrength factor of the fully braced 3D model. It was found that the overstrength factor and ductility are 1.3 and 1.33 in the cross-aisle direction and 1.9 and 1.0 in the down-aisle direction, respectively. The ductility of 1.0 in the down-aisle direction is very different from the single-frame ductility of 2.9. This is because braces are present in the full 3D model, which produces an essentially linear base shear versus roof displacement response: the braces prevent the structure from moving freely in the lateral direction, so the displacement obtained is actually the axial elongation of the braces. The same structure without braces shows about 23 times more deformation in the down-aisle direction.

A full-scale model of an RCB system will be generated, and incremental dynamic analysis will be carried out to calculate the force reduction factor. For these studies, ground motion records will be selected to best represent the seismicity of the region. For this further study, the following nonlinear behavior is simulated in the model to account for the pinched hysteresis with very low residual deformation observed during the experiment.

As the nonlinear hinge modeled in the FE software cannot handle reversed cyclic loading, a multilinear plastic link model was generated using an elastic-plastic backbone and pivot hysteresis [3] behavior. The pivot hysteresis behavior best represents the moment-rotation behavior of the RCB frame's beam-to-column joints, which are semi-rigid in nature. This model is able to simulate hysteresis with very low residual deformation, which makes it unique among the available hysteresis models. This plastic link will be used for incremental dynamic analysis in further studies.

Fig 20: Pivot hysteresis of the RCB beam-to-column joint

Fig 21: Simulated hysteresis compared with the experimental plot (the black line represents the simulated results)




Study Of Developments Of Green Ship Design Engineering Essay

Shipping is the primary means of transport worldwide. We, in Europe, rely on it for goods and for travelling from one corner of our continent to the other. Today’s globalised world trade would not be able to function without ships; after all, approximately 70% of the earth’s surface is covered by water. Considering the staggering percentage of world trade that vessels transport (80%), it is remarkable to note that shipping is already the most environmentally friendly mode of transport and that the emissions emitted by ships are small (3%). Operational pollution has been reduced to a negligible amount. MARPOL 73/78, the most important set of international rules dealing with the environment and the mitigation of ship pollution, has dealt with certain issues. However, there have also been considerable improvements in the efficiency of engines, ship hull designs and propulsion, leading to a decrease in emissions and an increase in fuel efficiency. The environmental footprint of shipping has been significantly improved through inputs from the marine equipment industry, which adopts a holistic approach when looking at the maritime sector. The equipment suppliers are valued contributors and innovators within the maritime cluster. The shipbuilding sector encompasses the shipyards and the marine equipment manufacturers, including service and knowledge providers. The European marine equipment industry is the global leader in propulsion, cargo handling, communication, automation and environmental systems.

The marine equipment sector comprises all products and services necessary for the operation, building, conversion and maintenance of ships (seagoing and inland waterways). This includes technical services in the field of engineering, installation and commissioning, and lifecycle management of ships. The value of the products, services and systems on board a vessel can exceed 70% (85% for cruise ships) of the value of the ship. The production ranges from the fabrication of steel and other basic materials to the development and supply of engines and propulsion systems, cargo handling systems, general machinery and associated equipment, environmental and safety systems, electronic equipment incorporating sophisticated control systems, advanced telecommunications equipment and IT. Thus the marine equipment industry supports the whole marine value chain and its stakeholders: from port infrastructure and operation to the ship/shore interface, shipbuilding and ship maintenance.

A large part of the improvements in the environmental footprint of shipping is achieved through the efforts of the European marine equipment industry. A major challenge for the industry today is to transfer technology from laboratories to ships, in order to reduce harmful emissions and deliver the benefit to wider society. Investments in upgrading older ships are necessary to make them greener and more efficient, also in view of setting a benchmark for future newbuildings. A short-term objective for the marine equipment sector is to improve the energy efficiency of ships by around 30%. In the medium to long term it has been estimated that a ship’s energy efficiency can be improved by 60%. These ambitious targets can, however, only be achieved by a continuous innovation process and through increased cooperation between the actors within the maritime cluster.

Shipping has proved to be an efficient mode of transport throughout history: cutting journey times, building larger vessels to carry more goods, and moving from the age of steam to the combustion engine. Ship-owners, in particular European ones, in cooperation with European shipbuilders (yards and marine equipment suppliers), have opted for efficient, high-tech products; this is why the European marine equipment sector is now globally one of the most advanced and innovative, although much more can be achieved. Technology already exists to help mitigate the environmental impacts of ships. The equipment manufacturers have to maintain levels of investment in new technologies, especially in the present economic climate. Future regulation for the ‘greening’ of shipping is likely to be adopted at the international level in the very near future. This could provide a benchmark for further innovation and ensure a high level of technical design, resulting in better products.

The aim of this book is to provide the reader with a look at currently existing green technology and its impact on the environment from a neutral standpoint. Further developed, it could provide a benchmark for the current capabilities of the technology and, if integrated onboard vessels, show what they could achieve above and beyond current regulatory requirements. If this technology could be integrated in today’s ships, they could become 15-20% greener and cleaner. With further demonstration of newly researched and developed technology, an eco-friendliness improvement of 33% or more could be achieved, ultimately leading to the zero-emissions ship in the not too distant future.

There are seven issues that should be taken into consideration when talking about reducing the environmental impact of vessels [1]:

“Green ship” is a name given to any seagoing vessel that contributes towards improving the present environmental condition in some way or another. The word “green” in “green ship” signifies the green cover of the earth, which is unfortunately shrinking as a result of increasing human intervention in the environment.

The maritime industry is a notable contributor to the greenhouse effect, a phenomenon that has drastically affected the earth’s natural ecosystem. Thus, in an effort to reduce carbon emissions from the maritime industry and to support the worldwide movement towards curbing the greenhouse effect, many shipyards around the world have started incorporating special methods and equipment in their ships, which not only help minimize the carbon footprint but also increase the ship’s efficiency. These environmentally friendly ships are known as “green ships.”

The greatest contributor to environmental pollution on a ship is its engine room. The diesel engines and other machinery in the engine room consume fuel and release carbon dioxide and other harmful gases in return. The key to reducing these emissions is to improve the design of the machinery and of the ship itself: ships should be designed in such a way that they pose the least possible threat to the environment. Thus, the better the design, the greener the ship.

A greener and more efficiently designed ship can be achieved by:

Minimizing the consumption of materials during shipbuilding.

Reducing the use of energy and toxic materials during the ship manufacturing process.

Using efficient machinery.

Improving the overall ship design.

Reusing the ship’s parts and accessories during ship maintenance.

The hull design and the kind of materials used in making a ship play a very important role in the overall efficiency of the ship. For example, optimization of the hull lines increases the speed of the ship, saves fuel and improves economic efficiency.

Green ship technology means using methods that reduce emissions and energy consumption during ship construction processes such as hull construction, painting and fitting. Moreover, a green ship should also abide by all the rules and regulations related to environmental protection and conservation. Thus, for a green ship, special attention is paid during both its manufacturing and service processes.

As mentioned earlier, improving the marine machinery is another way of making a ship green. The marine equipment chosen for a green ship should consume less energy, emit less pollution and have higher efficiency. This can be done by concentrating on the technical aspects of machines such as the boilers, main engine, generators, air conditioning system and air compressors.

A green ship also means using new technologies such as advanced hull and propeller systems, exhaust gas scrubber systems, waste recovery systems and exhaust gas recirculation systems. Apart from this, using the right grade of fuel for a particular engine also reduces carbon emissions and fuel consumption. This in turn results in less routine maintenance, demanding less human labor and energy.

Moreover, there are many new technologies that have completely changed the way a ship works while also reducing carbon emissions. A few examples of such green technologies are the electric propulsion system, which uses an electric management system to improve the overall efficiency of the ship while reducing exhaust, and advanced green diesel engines, which consume less fuel, reduce carbon emissions and produce less vibration and noise.

Thus, there are many ways of making a ship green. With the continuous increase in global warming, shipyards around the world are making extra efforts, in their own ways, to help mitigate this rising environmental concern. It can therefore be said that as long as the greenhouse problem remains unsolved, the concept of “green ships” is here to stay.

Scientists have agreed on the necessity of limiting global warming to 2 deg. C. A temperature increase of 2-4 deg. C will lead to:

increased droughts in certain areas.

increased precipitation in other areas.

more frequent and violent hurricanes.

A temperature increase of more than 4 deg. C will most likely change the planet as we know it today.

Kyoto Annex I countries have agreed to reduce GHG emissions by 5.2% by 2012 compared with 1990 levels. The EU has proposed a 20-20-20 target, i.e. a 20% reduction (compared to 1990 levels) by 2020. Scientists suggest a 50% reduction in GHG emissions by 2050 in order to limit global warming to 2 deg. C.

How much CO2 comes from shipping?

• Two recent studies:

• IMO Expert Group on Air Pollution

• 2009 IMO Greenhouse Gas update study.

• Both use 2007 as reference year.

Their estimates of CO2 emissions from shipping are:

2007: 1100 mill. t

2020: 1400 mill. t

Shipping accounts for 3-4% of the total anthropogenic* CO2.

(*produced by human activities)

According to BIMCO, shipping is projected to increase its GHG (CO2) emissions by approx. 25% from 2007 to 2020.
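As a quick cross-check of these figures, the sketch below reproduces the quoted share and growth rate; the global anthropogenic CO2 total of roughly 30,000 million tonnes for 2007 is an assumed round figure, not a number taken from the two studies.

```python
# Rough cross-check of the shipping CO2 figures quoted above.
shipping_2007_mt = 1100.0     # million tonnes CO2, 2007
shipping_2020_mt = 1400.0     # million tonnes CO2, 2020 projection
global_2007_mt = 30_000.0     # assumed global anthropogenic CO2 total (~30 Gt)

share = shipping_2007_mt / global_2007_mt              # about 0.037 -> 3-4%
growth = shipping_2020_mt / shipping_2007_mt - 1.0     # about 0.27 -> approx. 25%

print(f"Shipping share of global CO2 (2007): {share:.1%}")
print(f"Projected growth 2007-2020: {growth:.0%}")
```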

What are the options for shipping to reduce CO2 emissions?

1. Improve efficiency

2. Reduce trade (slow steaming, lay-up)

3. Market Based Instruments (MBI)

• It was estimated by the IMO Expert Group that fuel efficiency of new ships can be increased in the order of 30-40%.

• Existing ships can gain 10%.

• Slow-steaming is very efficient, but will limit trade.

• Given the predicted growth in shipping, fuel consumption is estimated to increase by 24%-28% between 2007 and 2020.

• If shipping is required to reduce its emissions, this cannot be done by technical and operational measures alone without disrupting world trade.

• Market Based Instruments (MBI) will need to be applied in the form of Emission Trading or Fuel Levy.

• ETS are part of the Kyoto Protocol and are utilized in several land-based industries.

• EU has developed its own ETS.

• Aviation and Shipping were exempted from regulation by the Kyoto Protocol.

• In July 2008 the EU Parliament decided to include Aviation in the EU ETS.

• Several EU MEPs have expressed a need of also including Shipping in the EU ETS.

• IMO discussed a proposal for the establishment of a Global Shipping ETS at MEPC 59 in July 2009.

During 2009, the partners of Green Ship of the Future decided to work together on a concept study of so-called ‘low emission ships’. The purpose of the study was to investigate the possible overall emission reductions when the various available technologies from the Green Ship of the Future project were implemented already during the design phase of a new ship.

Studies were carried out for two different ship types: an 8,500 TEU container vessel and a 35,000 DWT handysize bulk carrier. The basis for the container vessel was an A-Type vessel from Odense Steel Shipyard, while the basis for the bulk carrier was a Seahorse 35 bulk carrier from Grontmij|CarlBro with a capacity of 35,000 TDW.

In the concept studies, only available and proven ‘green’ technologies were used, which meant that it was possible to build the ships as specified and documented by the two task-leading companies of the concept studies, Odense Steel Shipyard and Grontmij | Carl Bro.

The concept studies were carried out to benchmark the new technologies in relation to the goal of Green Ship of the Future (reduction of exhaust gas emissions) and in relation to the coming international regulations on NOX and SOX emissions and most probably also CO2 emissions by introduction of the Energy Efficiency Design Index (EEDI) for new ships.

Designing a ship is a very complex process because many aspects and constraints have to be taken into account simultaneously. Very often demands interfere with each other in a negative way so that by fulfilling one demand, another demand cannot be fulfilled or is even counteracted.

This interference means that it is not always possible simply to add up the savings from each individual technology to get the total possible saving or reduction; a simple illustration of how individual savings can be combined is sketched after the list below. In the present summary, focus has been on the following technologies:

Sulphur scrubber system

Liquefied natural gas as fuel

Advanced hull paint

Waste heat recovery (WHR)

Water in fuel system (WIF)

Exhaust gas recirculation (EGR)

Other main engine technologies

Optimization of pump and cooling water systems

Advanced rudder and propeller designs

Speed nozzle
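One consequence of this interference is that individual percentage savings should generally not be added linearly. A common first approximation, sketched below with illustrative numbers, is to let each measure act on the consumption that remains after the previous ones (i.e. combine them multiplicatively); even this can overstate the total when measures overlap strongly.

```python
# Combine individual fuel-saving percentages multiplicatively, i.e. each measure
# is assumed to act on the consumption remaining after the previous ones.  The
# percentages are illustrative values in the ranges quoted in this summary
# (propeller/rudder ~4%, speed nozzle ~5%, hull paint ~3-8%, WHR ~7-14%);
# real interactions between measures can make the achievable total lower still.

def combined_saving(savings):
    remaining = 1.0
    for s in savings:
        remaining *= (1.0 - s)
    return 1.0 - remaining

measures = [0.04, 0.05, 0.05, 0.10]    # fractions, not percent
print(f"Simple additive sum:  {sum(measures):.1%}")
print(f"Multiplicative total: {combined_saving(measures):.1%}")
```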

To ensure that the two concept ships fulfil the relevant Class regulations, all calculations and drawings have been approved by Lloyds Register, and each ship has thus been given a Class Notation.

A well-designed propeller and rudder system can save up to approximately 4% of the fuel oil consumption. Such a system could be a modern propeller combined with an asymmetric rudder and a so-called Costa Bulb.

With new propeller design methods, modern propellers are becoming more and more efficient. The Costa Bulb creates a smoother slipstream from the propeller to the rudder, and with an asymmetric rudder the rotational energy from the propeller is utilised more efficiently than with a conventional rudder.

Normally, nozzles are used to improve the bollard pull on tugs, supply vessels, fishing boats and many other vessels which need high pulling power at low speed.

This new kind of nozzle, called a speed nozzle, is developed to improve the propulsion power at service speed. Using the new speed nozzle concept has a saving potential of approximately 5%.

One way to fulfil the future regulations on sulphur emissions is to install an exhaust gas scrubber. This scrubber system uses water to wash the sulphur out of the exhaust gas. Measurements have shown that SOx emissions are reduced by up to 98%. It is not only the sulphur that is reduced; the content of harmful particles is also reduced by approximately 80%.

Normally, the electrical power in harbour conditions is supplied by auxiliary engines running on heavy fuel or marine diesel. By using auxiliary engines running on LNG (liquefied natural gas) instead of conventional fuel, significant emission reductions can be achieved.

Switching from diesel to LNG has the potential to reduce emissions by approximately 20% for CO2, approximately 35% for NOx and 100% for SOx.

The choice of the right hull paint is essential to keep the resistance at a minimum. Modern anti-fouling hull paint with a low water friction has a fuel saving potential in the region of 3 to 8%.

The reduction of emissions is proportional to the fuel savings.

The waste heat recovery system utilises the heat in the exhaust gas from the main engine. The exhaust gas contains a lot of heat energy which can be transformed into steam. The steam can then be used for heating of the accommodation, cargo areas and fuel oil. The steam can also be used for power generation in a turbo generator. Depending on the configuration, a waste heat recovery system can reduce the fuel consumption by 7 – 14 %.

The formation of NOx depends on the temperature in the cylinder liner; by lowering the temperature, the NOx emissions are also lowered. Adding water to the fuel before injection lowers the temperature in the cylinder, resulting in a NOx reduction of 30-35%.

The formation of NOx emissions can also be reduced by lowering the temperature in the cylinder liner of the main engine through exhaust gas recirculation. Some of the exhaust gas is mixed with the scavenge air so that the oxygen content is reduced along with the temperature in the combustion chamber. Measurements have shown that this technology has a potential NOx reduction of approximately 80%.

By using an optimised cooling water system it is possible to save up to 20% of the generated electrical power, corresponding to a reduction of approximately 1.5% in the total fuel consumption. Studies show that the resistance in the cooling water system can often be reduced; with lower resistance, smaller pumps can be used, saving up to approximately 90% of the power needed for the pumps.
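A quick, hedged cross-check of these two figures: if cutting the electrical power by 20% saves about 1.5% of the total fuel, then electricity generation must account for roughly 7.5% of the ship's fuel consumption, assuming the auxiliary and main engines have broadly similar specific fuel consumption.

```python
# Implied share of total fuel consumption that goes to electrical generation,
# back-calculated from the two figures quoted above.  Assumes the auxiliary
# and main engines have broadly similar specific fuel consumption.
electrical_cut = 0.20    # 20% reduction in generated electrical power
fuel_cut = 0.015         # corresponding ~1.5% reduction in total fuel use

implied_electrical_share = fuel_cut / electrical_cut
print(f"Implied electrical share of total fuel use: {implied_electrical_share:.1%}")  # 7.5%
```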

’Green Ship of the Future’ is a Danish joint industry project for innovation and demonstration of technologies and methods that make shipping more environmentally friendly.

With respect to airborne emissions, the aim of the project is to provide the necessary technologies and operational means to reduce emissions as follows for newbuildings:

30 % reduction of CO2 emissions

90 % reduction of NOx emissions

90 % reduction of SOx emissions

Turbocharging with variable nozzle rings results in high efficiency over a wider load range compared with traditional turbochargers, especially at low engine loads, i.e. low speeds. Together with Maersk, ABB has installed the new A100 VTG turbocharger with variable nozzle onboard Alexander Maersk. The system is currently undergoing tests, but initial conclusions are very positive. The next stage is two-stage turbocharging, which is currently being developed by ABB.

Optimisation of WHR system in close cooperation with partners. Determination of vessel operation profile and optimisation of engine for improved exhaust gas data. Installation of new exhaust gas fired boiler, turbo generator (steam/gas turbine and generator). Optimisation of WHR system given the available space constraints. Maersk is currently installing WHR on a wide range of vessels based upon the GSF project.

Re-design pump & auxiliary systems with a focus on power consumption. Introduce automated systems that continuously control the power demand.

In two projects, optimised control algorithms for reefer systems (a joint project with Lodam A/S) and for general high temperature (HT) and low temperature (LT) onboard refrigeration systems are being developed by Aalborg University. The latter system is designed for a Maersk newbuilding, and the effect is documented by means of advanced simulations. Potential: the project is still at an early stage, but preliminary results indicate significant energy savings, possibly as much as 45% (rough estimate).

GreenSteam is a new energy saving system for ships, providing a reduction in energy consumption by adjusting ship trim and power. Based on readings from multiple sensors over a period of time, the relations between the dynamically changing conditions and the energy requirements are mapped and analysed into a mathematical model. This model is used for onboard guidance to the crew as regards optimum trim and power. Fuel savings of at least 2.5% have been demonstrated onboard a product tanker owned by DS NORDEN A/S. The system will be installed on 4 or more new NORDEN vessels during 2010.

The air resistance of a bulk carrier is approximately 5-8 percent of its total resistance. Through advanced wind tunnel studies and optimization of the superstructure, the air resistance will be reduced to a minimum. The following steps are included in the project:

• Wind Tunnel test of existing design.

• Superstructure optimization (e.g. crane, forecastle, accommodation: rounded shapes, elimination of recirculation zones etc.)

• The future bulk carrier where all traditions are reconsidered…

Based upon the results investigations might continue on other vessel types.

SeaTrim is a trim optimisation application based on model test results for a large matrix of different combinations of draught, trim and speed. SeaTrend is a system for performance monitoring, using operational data from the ship. With SeaTrim & SeaTrend installed onboard the six L-Class chemical tankers owned and operated by Nordic Tankers, the aim of the project is to demonstrate the effect of the tools in terms of:

Ability to determine hull and propeller fouling and trends.

Ability to guide the crew as regards to optimum trim.

MAN Diesel’s propulsion division in Frederikshavn has developed a new nozzle which can enhance the performance of many types of vessels. Where existing nozzle designs have primarily been applied to ships that require high thrust at low ship speeds, the new product is intended for vessels with a higher service speed, i.e. tankers, bulkers, PSVs etc. The new nozzle will be tested at model scale on a tanker operated by Nordic Tankers. The test will be carried out in the towing tank at FORCE Technology.

HEMPEL and FORCE Technology have made an official agreement to monitor all new applications of HEMPASIL X3 with the SeaTrend performance monitoring software. Currently, a number of vessels have had both the X3 paint and the SeaTrend software applied. Based on the experience from the project, the effect of the newest generation of silicone paints will be documented in real service.

The advanced version is specially suited for Short Sea Shipping and will allow the officers to plan their route taking into account ETA, weather (wind, waves and current), and shallow waters. With highly detailed weather prognoses of the North Sea and Baltic Sea (supplied by DMI) and with a GPS link, SeaPlanner continuously monitors and guides the Master on the optimum speed and heading. With this project DFDS and FORCE Technology will show the potential of the SeaPlanner based upon the experience gained through the initial operation. The system is currently installed on 7 vessels and will be installed on additional 15 vessels in spring 2010.

‘Lab on a ship’ (LOAS) is a new and innovative product by NanoNord. During bunkering, LOAS provides online measurements of the elements in the bunker oil, lube oil, cylinder oil etc. In addition, the system offers online measurements of exhaust gas emissions of NOx and SOx. With the LOAS system, the sulphur content of both the bunker oil and the exhaust emissions is measured and documented, which is important for verification against the MARPOL Annex VI regulations. LOAS is installed onboard two bulk carriers owned by Lauritzen Bulkers, and the project aims to demonstrate the applicability of the system.

The challenge was to take an existing modern design, evaluate the suitable technologies and generate a picture of the improved performance of the vessel. We have evaluated two different vessel types. We have not changed the hull form, the DWT or other main parameters.

• Speed nozzle/optimized propeller

• Twisted spade rudder with Costa bulb

• Water in fuel (WIF)

• Exhaust gas recirculation (EGR)

• Waste Heat Recovery system (WHR)

• Exhaust Gas Scrubber

• Ducted/direct air intake for main engine

• Optimised coolers and cooling pumps

• Auxiliary engine operation on marine diesel oil (MDO)

• High capacity fresh water generator.

Extra costs 5 mill USD (Corresponds to approx 20% of newbuilding costs)

8500 TEU container vessel, optimised with:

Water in Fuel technology (WIF)

Exhaust gas recycling (EGR)

Waste heat recovery exhaust boilers

Power and Steam turbine technology

Exhaust gas Scrubber

Extra costs 10 mill Euro (Corresponds to approx 10% of newbuilding costs)

With respect to NOx and SOx it is possible to reach the goals.

Reducing NOx and SOx will in some cases come at the cost of increased CO2 emissions.

With respect to CO2, the study shows that we still need to work with technical solutions and operation to meet the goal.

Further reductions in CO2 must be obtained through continued efforts to reduce vessel resistance, optimised operation (slow steaming), more effective propulsion systems, more fuel-efficient engines, alternative fuels (LNG, biofuel etc.) and the addition of alternative green means of propulsion (fuel cells, wind, solar etc.).

Further reductions in CO2 will also reduce NOx and SOx emissions.

Retrofit challenges.

The challenge and objective of the “Green Ship of the Future” initiative is to reduce CO2 emissions by around 30 per cent and nitrogen and sulphur oxides by 90 per cent. The initiative uses both familiar and new technologies. Green Ship of the Future is primarily focusing on the large two-stroke engines of the type used in large ocean-going container ships and tankers.

The project was launched in 2008 by MAN Diesel & Turbo in conjunction with the A.P. Møller-Mærsk Group Danish shipping firm, Odense Steel Shipyard and Aalborg Industries. The initiative’s primary objective is to highlight and develop new technologies aimed at achieving a significant reduction in marine emissions. The project now has some 15 partners, including shipping companies, their suppliers and several Danish universities.

In the summer of 2009, the initiative won the International Environmental Award from Sustainable Shipping for being the most environmentally friendly transport initiative. Sustainable Shipping is one of the leading organisations championing the sustainable use of our seas and oceans. Judging panel member Dr. Simon Walmsley from the World Wide Fund For Nature (WWF) said: “If we want to safeguard the survival of our planet, we need to change our behaviour. No branch of industry can afford to neglect these essential changes.”

Shipping is an extremely eco-friendly form of transport, but with the Green Ship of the Future initiative, we are making even greater efforts to protect the climate and the environment. Together with our partners, we want to help contribute towards the development of products that are even more eco-friendly and will reduce emissions further.

MAN Diesel & Turbo is heading or participating in the following sub-projects arising from the Green Ship of the Future initiative:

• Exhaust Gas Scrubbers

• Lower Ship Speeds within certifications

• Auto-tuning of MAN Diesel & Turbo engines

• Emission reduction using exhaust gas recirculation

• Waste heat recovery

Green Technology

Overview of green and cost-saving technology from Aalborg Industries.

As a market-leading manufacturer of highly efficient and environmentally friendly equipment for the maritime market, such as marine boilers and heat exchangers, thermal fluid systems and inert gas systems, the Aalborg Industries Group develops new green solutions to support our customers in building and operating their commercial fleets to the highest standard for low environmental impact.

Waste Heat Recovery

New and more efficient exhaust gas Waste Heat Recovery systems utilizing the heat in the exhaust after diesel engines or gas turbines to further improve the total efficiency of the propulsion plant, thereby reducing fuel consumption.

M.E. Exhaust gas scrubbers

Exhaust gas scrubber system after diesel Main Engines significantly reducing the sulphur oxide (SOx) emission as well as emission of particles.

Economizer after aux. engines

For new installations or retrofit, an efficient exhaust gas economizer utilizing the heat in the exhaust gas from the auxiliary engines during port stays will significantly reduce the oil consumption for the oil-fired boiler.

Ballast water treatment

In a joint venture with Aquaworx, Germany, Aalborg Industries will develop ballast water treatment equipment meeting IMO regulations to prevent, minimize and ultimately eliminate the transfer of harmful aquatic organisms and pathogens.

Superheater for aux. boilers

Installing a superheater on an auxiliary boiler will increase the efficiency of the cargo pump turbine substantially and reduce the fuel consumption and emissions during discharge operation on crude oil carriers.

MGO burner modification

Aalborg Industries is developing a solution to facilitate safe and easy switching between fuels from HFO to MGO or MDO and back as required in ports in Europe and USA. Firing with MGO in ports is required to limit emissions of sulphur oxides (SOx) as per IMO, US and EU regulations.

Cooling system for LNG

Aalborg Industries Inert Gas Systems has developed a new cooling system for LNG carriers using a mere 10% of the usual quantity of Freon (which is a known greenhouse gas) while also using the new, environmentally friendly Freon type.

Electrical Steam Generation

Connected to the auxiliary steam boiler, the VESTA™ EH-S heater is for certain ship types replacing or acting as a Donkey boiler and an alternative to conversion of boilers for MGO operation. The VESTA™ EH-S heater complies with European standards and is designed for easy approval by the classification societies.

Waste heat recovery economizer after auxiliary engines

In the coming years, the marine industry and shipowners face big challenges as new environmental legislation puts special focus on the reduction of emissions from fossil fuels. Aalborg Industries has therefore developed an efficient exhaust gas economizer utilizing the heat in the exhaust gas from the auxiliary engines during port stays, which will significantly reduce the oil consumption of the oil-fired boiler.

For several decades, we have installed WHR systems after the ship’s main engines; these units are to a large extent able to meet the vessel’s steam requirement during seagoing operation, and in some installations they can also assist with the generation of electrical power.

The waste heat from the auxiliary engines has not been considered in the past, but it actually contains a large amount of energy which can be utilized to assist with the steam requirements, mainly during port stays but for some vessels also during seagoing operation.

The WHR concept has been developed as a customized solution with special focus on energy generation relative to return on investment; the payback time can be as short as 7 months for a complete WHR boiler system including accessories and installation onboard the ship. The normal payback time will be approximately 1 to 1½ years, depending on the number of days the produced steam can be utilized (offset against the steam requirement of the oil-fired boiler) and on the redundancy requirements.
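A hedged sketch of the payback arithmetic behind figures of this kind is given below. The capital cost, recovered heat, usable hours, boiler efficiency and fuel price are all illustrative assumptions rather than Aalborg Industries data, chosen only so that the result lands in the quoted 7 months to 1½ years range.

```python
# Simple payback estimate for a WHR economizer on the auxiliary engines.
# Every input below is an illustrative assumption, not a vendor figure.
capex_usd = 150_000.0            # boiler, accessories and installation
recovered_heat_kw = 800.0        # average heat recovered while the unit is in use
usable_hours_per_year = 4_000.0  # port-stay (and other usable) hours per year
boiler_efficiency = 0.85         # efficiency of the oil-fired boiler being offset
fuel_lhv_mj_per_kg = 40.5        # lower heating value of the boiler fuel
fuel_price_usd_per_t = 600.0

# Fuel the oil-fired boiler would have burned to supply the same heat.
heat_mj_per_year = recovered_heat_kw * 3.6 * usable_hours_per_year      # kWh -> MJ
fuel_t_per_year = heat_mj_per_year / (fuel_lhv_mj_per_kg * boiler_efficiency) / 1000.0
saving_usd_per_year = fuel_t_per_year * fuel_price_usd_per_t

payback_years = capex_usd / saving_usd_per_year
print(f"Fuel saved: {fuel_t_per_year:,.0f} t/year, payback: {payback_years:.1f} years")
```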

We offer a concept based on well-proven and innovative solutions to ensure the best operating conditions and optimal return on investment. The design of the heating surface of the WHR boiler is the result of an enhancement of our well-proven technologies, with a small footprint and the lowest possible weight-to-output ratio.

To ensure the most advantageous design, the WHR boiler concept will be tailored to the individual ship and engine design, with due consideration of the existing uptake back pressure etc. The concept comes in two designs:

One that requires a steam space in another boiler (e.g. in an existing auxiliary boiler) and

One that has its own steam space.

Able to supply or support the steam demand during port stay

Cost of steam production (energy) is nearly free

Financially sound investment with very short payback time

Adds a “green” profile to the ship

Lower emission tax when finally agreed

Less maintenance and lower operating costs for the oil-fired boiler

Exhaust Gas Scrubbers

Dimensions/weight are indicative figures only and subject to change.


