The Response Surface Methodology Engineering Essay

 


In the previous chapter, the working of the milling machine, the machining parameters that affect surface roughness, chip-thickness formation, and the factors influencing surface roughness in milling were discussed. This chapter gives a detailed overview of response surface methodology together with its mathematical background.


Response Surface Methodology (RSM) is a collection of statistical and mathematical techniques useful for developing, improving, and optimizing processes [23]. The most extensive applications of RSM are in situations where several input variables potentially influence some performance measure or quality characteristic of the process. This performance measure or quality characteristic is called the response. The input variables are sometimes called independent variables, and they are subject to the control of the scientist or engineer. The field of response surface methodology consists of the experimental strategy for exploring the space of the process or independent variables, empirical statistical modeling to develop an appropriate approximating relationship between the response and the process variables, and optimization methods for finding the values of the process variables that produce desirable values of the response.


In this thesis, the focus is on the statistical modeling needed to develop an appropriate approximating model between the response y and the independent variables.


In general, the relationship is


y = f(ξ1, ξ2, …, ξk) + e ………………………………….………. (3.1)


where the form of the true response function f is unknown and perhaps very complicated, and e is a term that represents other sources of variability not accounted for in f. Usually e includes effects such as measurement error on the response, background noise, the effect of other variables, and so on. Usually e is treated as a statistical error, often assuming it to have a normal distribution with mean zero and variance σ². Then


E(y) = η = f(ξ1, ξ2, …, ξk) …… (3.2)


The variables in Equation (3.2) are usually called the natural variables, because they are expressed in the natural units of measurement, such as degrees Celsius, pounds per square inch, etc. In much RSM work it is convenient to transform the natural variables to coded variables, which are usually defined to be dimensionless with mean zero and the same standard deviation. In terms of the coded variables, the response function (3.2) will be written as


η = f(x1, x2, …, xk) ……………………………………………………(3.3)
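The transformation from natural to coded variables can be sketched as follows. This is a minimal illustration in Python, not part of the thesis experiments; the spindle-speed range used is hypothetical.

```python
# Illustrative sketch: converting a natural variable to a coded variable.
# The coding maps the low/high natural levels onto the dimensionless
# interval [-1, +1], centred at zero.

def code(natural, low, high):
    """Map a natural-unit value onto the coded scale x in [-1, +1]."""
    centre = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return (natural - centre) / half_range

def decode(coded, low, high):
    """Inverse transform: recover the natural-unit value from a coded one."""
    centre = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return centre + coded * half_range

# Example: a hypothetical spindle speed studied between 1000 and 2000 rpm.
print(code(1000, 1000, 2000))   # -1.0 (low level)
print(code(1500, 1000, 2000))   #  0.0 (centre point)
print(decode(1.0, 1000, 2000))  # 2000.0 (high level)
```

The same transformation is applied to every process variable, so that all coded factors share the same scale regardless of their natural units.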


Because the form of the true response function f is unknown, we must approximate it. In fact, successful use of RSM is critically dependent upon the experimenter's ability to develop a suitable approximation for f. Usually, a low-order polynomial in some relatively small region of the independent variable space is appropriate. In many cases, either a first-order or a second-order model is used. The first-order model is likely to be appropriate when the experimenter is interested in approximating the true response surface over a relatively small region of the independent variable space in a location where there is little curvature in f. For the case of two independent variables, the first-order model in terms of the coded variables is


η = β0 + β1x1 + β2x2…………………………………………...…..….. (3.4)


The form of the first-order model in Equation (3.4) is sometimes called a main effects model, because it includes only the main effects of the two variables x1 and x2. If there is an interaction between these variables, it can be added to the model easily as follows:


η = β0 + β1x1 + β2x2 + β12x1x2……………………………………….. (3.5)


This is the first-order model with interaction. Adding the interaction term introduces curvature into the response function. Often the curvature in the true response surface is strong enough that the first-order model (even with the interaction term included) is inadequate. A second-order model will likely be required in these situations. For the case of two variables, the second-order model is


η = β0 + β1x1 + β2x2 + β11x1² + β22x2² + β12x1x2……………………… (3.6)


This model would likely be useful as an approximation to the true response surface in a relatively small region. The second-order model is widely used in response surface methodology for several reasons:


The second-order model is very flexible. It can take on a wide variety of functional forms, so it will often work well as an approximation to the true response surface.


It is easy to estimate the parameters (the β's) in the second-order model. The method of least squares can be used for this purpose.


There is considerable practical experience indicating that second-order models work well in solving real response surface problems.
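Estimating the β's of the two-variable second-order model (3.6) by least squares can be sketched as follows. This Python example assumes numpy is available; the nine runs form a hypothetical 3² design in coded units, and the responses are synthetic, generated from assumed coefficients so the recovered estimates can be checked.

```python
import numpy as np

# Hypothetical 3^2 design in coded units: all combinations of -1, 0, +1.
runs = [(x1, x2) for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)]

# Assumed "true" coefficients: b0, b1, b2, b11, b22, b12 (illustrative only).
true_beta = np.array([10.0, 2.0, -1.0, 0.5, 0.8, -0.3])

# Design matrix with columns 1, x1, x2, x1^2, x2^2, x1*x2 (model 3.6).
X = np.array([[1, x1, x2, x1**2, x2**2, x1 * x2] for x1, x2 in runs], float)
y = X @ true_beta  # noise-free synthetic responses

# Least-squares estimate of the beta's.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 6))  # recovers the assumed coefficients
```

With real (noisy) data the estimates would only approximate the underlying coefficients, and their significance would be judged with ANOVA, as discussed later in this chapter.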


In general, the first-order model is


η = β0 + β1x1 + β2x2 + … + βkxk…………………………………………(3.7)


and the second-order model is


η = β0 + Σ(i=1..k) βixi + Σ(i=1..k) βiixi² + ΣΣ(i<j) βijxixj ………...………..(3.8)


In some situations, approximating polynomials of order greater than two are used. The general motivation for a polynomial approximation for the true response function f is based on the Taylor series expansion around the point x10, x20, …, xk0.


Finally, let’s note that there is a close connection between RSM and linear regression analysis. For example, consider the model


y = β0 + β1x1 + β2x2 + … + βkxk + e………………………………….…(3.9)


The β's are a set of unknown parameters. To estimate the values of these parameters, we must collect data on the system we are studying. Because, in general, polynomial models are linear functions of the unknown β's, we refer to the technique as linear regression analysis.


RSM is an important branch of experimental design. RSM is a critical tool in developing new processes and optimizing their performance. The objectives of quality improvement, including reduction of variability and improved process and product performance, can often be accomplished directly using RSM. It is well known that variation in key performance characteristics can result in poor process and product quality. During the 1980s [2, 3] considerable attention was given to process quality, and methodology was developed for using experimental design, specifically for the following:


For designing or developing products and processes so that they are robust to component variation.


For minimizing variability in the output response of a product or a process around a target value.


For designing products and processes so that they are robust to environmental conditions.


By robust we mean that the product or process performs consistently on target and is relatively insensitive to factors that are difficult to control. Professor Genichi Taguchi [24, 25] used the term robust parameter design (RPD) to describe his approach to this important problem. Essentially, robust parameter design methodology seeks to reduce process or product variation by choosing levels of controllable factors (or parameters) that make the system insensitive (or robust) to changes in a set of uncontrollable factors that represent most of the sources of variability. Taguchi referred to these uncontrollable factors as noise factors. RSM assumes that these noise factors are uncontrollable in the field, but can be controlled during process development for purposes of a designed experiment.


Considerable attention has been focused on the methodology advocated by Taguchi, and a number of flaws in his approach have been exposed. However, the framework of response surface methodology allows many useful concepts in his philosophy to be incorporated easily [23]. There are also two other full-length books on the subject of RSM [26, 27]. In this work we concentrate mostly on building and optimizing the empirical models and largely do not consider the problems of experimental design.


Most applications of RSM are sequential in nature. At first some ideas are generated about which factors or variables are likely to be important in the response surface study. This is usually called a screening experiment. The objective of factor screening is to reduce the list of candidate variables to a relatively few, so that subsequent experiments will be more efficient and require fewer runs or tests. The purpose of this phase is the identification of the important independent variables.


The experimenter's objective is to determine whether the current settings of the independent variables result in a value of the response that is near the optimum. If the current settings or levels of the independent variables are not consistent with optimum performance, then the experimenter must determine a set of adjustments to the process variables that will move the process toward the optimum. This phase of RSM makes significant use of the first-order model and an optimization technique called the method of steepest ascent (or descent).
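The idea of moving along the path of steepest ascent can be sketched as follows. The fitted first-order coefficients below are hypothetical; runs are placed along the gradient direction (b1, b2), normalised so the step in the most influential variable is one coded unit.

```python
# Illustrative sketch of the method of steepest ascent. The coefficients of
# the fitted first-order model y = b0 + b1*x1 + b2*x2 (coded units) are
# hypothetical; the path is followed until the response stops improving.

b1, b2 = 2.0, 1.0                      # fitted first-order coefficients
base = max(abs(b1), abs(b2))           # step relative to the largest effect
direction = (b1 / base, b2 / base)     # (1.0, 0.5)

# Successive points along the path of steepest ascent from the design centre.
path = [(k * direction[0], k * direction[1]) for k in range(4)]
print(path)  # [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0), (3.0, 1.5)]
```

In practice, experiments are run at each point of the path and the procedure stops when no further improvement in the response is observed, after which a new first-order model is fitted around the best point found.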


Phase 2 begins when the process is near the optimum. At this point the experimenter usually wants a model that will accurately approximate the true response function within a relatively small region around the optimum. Because the true response surface usually exhibits curvature near the optimum, a second-order model (or perhaps some higher-order polynomial) should be used. Once an appropriate approximating model has been obtained, this model may be analyzed to determine the optimum conditions for the process. This sequential experimental process is usually performed within some region of the independent variable space called the operability region, experimentation region, or region of interest.


Multiple linear regression (MLR) is a method used to model the linear relationship between a dependent variable and one or more independent variables. The dependent variable is sometimes also called the predictand, and the independent variables the predictors. MLR is based on least squares: the model is fit such that the sum of squares of the differences between observed and predicted values is minimized. The relationship between a set of independent variables and the response y is determined by a mathematical model called a regression model. When there are more than two independent variables, the regression model is called a multiple-regression model. In general, a multiple-regression model with q independent variables takes the form


Yi = β0 + β1xi1 + β2xi2 + … + βqxiq + ei (i = 1, 2, …, N)


Yi = β0 + Σ(j=1..q) βjxij + ei


where n > q. The parameter βj measures the expected change in the response y per unit increase in xj when the other independent variables are held constant. The ith observation at the jth independent variable is denoted by xij. The data structure for the multiple-regression model is shown in Table 3.1.


Table 3.1: Data for Multiple-Regression Model


y      x1     x2     …     xq
y1     x11    x12    …     x1q
y2     x21    x22    …     x2q
…      …      …      …     …
yn     xn1    xn2    …     xnq
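Fitting a multiple-regression model of this form by least squares can be sketched as follows. This Python example assumes numpy; the two predictors and the responses are hypothetical, constructed from known coefficients, and the column of ones in X carries the intercept β0.

```python
import numpy as np

# Illustrative sketch of fitting Yi = b0 + b1*xi1 + b2*xi2 + ei by least
# squares. Data are hypothetical and noise-free so that the recovered
# coefficients can be checked exactly.

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
y = 3.0 + 2.0 * x1 - 1.0 * x2          # assumed model: b0=3, b1=2, b2=-1

# Design matrix: a column of ones (intercept) plus one column per predictor.
X = np.column_stack([np.ones_like(x1), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 6))  # approximately [3, 2, -1]
```

Each estimated βj is then read as the expected change in y per unit increase in xj with the other predictors held constant, exactly as described above.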


Box-Behnken designs are rotatable designs that also fit a full quadratic model but use just three levels of each factor. Design-Expert offers Box-Behnken designs for three to seven factors. These designs require only three levels, coded as -1, 0, and +1. Box and Behnken created this design by combining two-level factorial designs with incomplete block designs. This procedure creates designs with desirable statistical properties, but, most importantly, with only a fraction of the experiments needed for a full three-level factorial. These designs offer limited blocking options, except for the three-factor version.


Box-Behnken designs require a lower number of actual experiments to be performed, which facilitates probing into possible interactions between the parameters studied. The Box-Behnken design is a spherical, rotatable design. It consists of a central point and the middle points of the edges of a cube inscribed on a sphere. It contains three interlocking two-level factorial designs and a central point. In the present work, the three-level, three-factor Box-Behnken experimental design is applied to study the process parameters.
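The construction described above can be sketched in a few lines of Python: for every pair of factors a 2² factorial (±1) is run while the remaining factor is held at its centre level 0, and a centre point is added. This is an illustrative generator, not the thesis design matrix; a single centre point is used here, whereas practical designs usually replicate it.

```python
from itertools import combinations

# Illustrative sketch: runs of a k-factor Box-Behnken design in coded units.
def box_behnken(k):
    runs = []
    for i, j in combinations(range(k), 2):   # each pair of factors
        for a in (-1, 1):
            for b in (-1, 1):                # 2^2 factorial for the pair
                point = [0] * k              # remaining factors at centre
                point[i], point[j] = a, b
                runs.append(tuple(point))
    runs.append(tuple([0] * k))              # one centre point, for brevity
    return runs

design = box_behnken(3)
print(len(design))  # 13 runs: 12 edge points + 1 centre point
```

For three factors this yields the familiar 13-run design (12 mid-edge points of the cube plus the centre), compared with 27 runs for a full three-level factorial.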


3.1.5 Analysis of Variance (ANOVA)


The purpose of the statistical analysis of variance (ANOVA) is to determine which design parameters significantly affect the surface roughness. Based on the ANOVA, the relative importance of the machining parameters with respect to surface roughness is investigated, to determine more accurately the optimal combination of the machining parameters.


Analysis of variance (ANOVA) uses the same conceptual framework as linear regression. The main difference comes from the nature of the explanatory variables: instead of quantitative, here they are qualitative. In ANOVA, explanatory variables are often called factors. If p is the number of factors, the ANOVA model is written as follows:


yi = β0 + Σ(j=1..p) βk(i,j),j + ei ………………………………………………… (3.1)


where yi is the value observed for the dependent variable for observation i, k(i,j) is the index of the category of factor j for observation i, and ei is the error of the model. The hypotheses used in ANOVA are identical to those used in linear regression: the errors ei follow the same normal distribution N(0, σ) and are independent.


With this hypothesis added, the model implies that, within the framework of the linear regression model, the yi are realizations of random variables with mean µi and variance σ², where


µi = β0 + Σ(j=1..p) βk(i,j),j ……………………………………………...….(3.2)


To use the various tests proposed in the results of linear regression, it is recommended to check that the underlying hypotheses have been correctly verified. The normality of the residuals can be checked by analyzing certain charts or by using a normality test. The independence of the residuals can be checked by analyzing certain charts or by using the Durbin-Watson test.
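As a minimal numerical illustration of the ANOVA computation itself (hypothetical data, one factor with three categories), the F statistic compares the variability between category means with the residual variability within categories:

```python
# Illustrative one-way ANOVA with hypothetical data: three categories of a
# single factor, F = MS_between / MS_within.

groups = [[5.0, 6.0, 7.0], [8.0, 9.0, 10.0], [5.0, 7.0, 6.0]]

n = sum(len(g) for g in groups)                 # total observations
grand_mean = sum(sum(g) for g in groups) / n

# Between-category and within-category (error) sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1                    # categories minus one
df_within = n - len(groups)                     # error degrees of freedom
F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 3))  # 9.0
```

A large F relative to the F(df_between, df_within) distribution indicates that at least one category mean differs significantly; software such as DOE++ reports the corresponding p-value alongside this statistic.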


Interactions: By interaction we mean an artificial factor (not measured) which reflects the interaction between at least two measured factors. To make a parallel with linear regression, the interactions are equivalent to the products between the continuous explanatory variables, although here obtaining interactions requires nothing more than simple multiplication between two variables. The notation used to represent the interaction between factor A and factor B is A*B. The interactions to be used in the model can be easily defined in DOE++ software.


Nested effects: When constraints prevent us from crossing every level of one factor with every level of the other factor, nested factors can be used. We say we have a nested effect when fewer than all levels of one factor occur within each level of the other factor. An example of this might be if we want to study the effects of different machines and different operators on some output characteristic, but we cannot have the operators change the machines they run. In this case, each operator is not crossed with each machine but rather only runs one machine. DOE++ software can automatically detect nested factors, and one nested factor can be included in the model.


Balanced and unbalanced ANOVA: We talk of balanced ANOVA when, for each factor (and interaction if available), the number of observations within each category is the same. When this is not true, the ANOVA is said to be unbalanced. DOE++ software can handle both cases.


Random effects: Random factors can be included in an ANOVA. When some factors are supposed to be random, DOE++ software displays the expected mean squares table.


Constraints: During the calculations, each factor is broken down into a sub-matrix containing as many columns as there are categories in the factor. Typically, this is a full disjunctive table. Nevertheless, this breakdown poses a problem: if there are g categories, the rank of this sub-matrix is not g but g-1. This leads to the necessity of deleting one of the columns of the sub-matrix and possibly transforming the other columns. Several strategies are available depending on the interpretation we want to make afterwards:


a1 = 0: the parameter for the first category is null. This choice forces the effect of the first category to act as a standard. In this case, the constant of the model is equal to the mean of the dependent variable for group 1.


an = 0: the parameter for the last category is null. This choice forces the effect of the last category to act as a standard. In this case, the constant of the model is equal to the mean of the dependent variable for group g.


Sum(ai) = 0: the sum of the parameters is null. This choice forces the constant of the model to be equal to the mean of the dependent variable when the ANOVA is balanced.


Sum(ni.ai) = 0: the sum of the parameters weighted by category size is null. This choice forces the constant of the model to be equal to the mean of the dependent variable even when the ANOVA is unbalanced.


Note: even if the choice of constraint influences the values of the parameters, it has no effect on the predicted values or on the different fitting statistics.


Multiple comparisons tests: One of the main applications of ANOVA is multiple comparisons testing, whose aim is to check whether the parameters for the various categories of a factor differ significantly or not. For example, in the case where four treatments are applied to plants, we want to know not only if the treatments have a significant effect, but also if the treatments have different effects. Numerous tests have been proposed for comparing the means of categories. The majority of these tests assume that the sample is normally distributed. DOE++ software provides the main tests.


Summary of the variables selection: Where a selection method has been chosen, DOE++ software displays the selection summary. For a stepwise selection, the statistics corresponding to the different steps are displayed. Where the best model for a number of variables varying from p to q has been selected, the best model for each number of variables is displayed with the corresponding statistics, and the best model for the criterion chosen is displayed in bold.


Observations: The number of observations used in the calculations. In the formulas shown below, n is the number of observations.


Sum of weights: The sum of the weights of the observations used in the calculations. In the formulas shown below, W is the sum of the weights.


DF: The number of degrees of freedom for the chosen model (corresponding to the error part).


R²: The determination coefficient for the model. This coefficient, whose value is between 0 and 1, is only displayed if the constant of the model has not been fixed by the user. Its value is defined by:


R² = 1 − Σi wi(yi − ŷi)² / Σi wi(yi − ȳ)² , where ȳ = (1/W) Σi wi yi


The R² is interpreted as the proportion of the variability of the dependent variable explained by the model. The nearer R² is to 1, the better the model. The problem with the R² is that it does not take into account the number of variables used to fit the model.


Adjusted R²: The adjusted determination coefficient for the model. The adjusted R² can be negative if the R² is near zero. This coefficient is only calculated if the constant of the model has not been fixed by the user. For p explanatory variables, its value is defined by:

R²adj = 1 − (1 − R²)(W − 1)/(W − p − 1)


The adjusted R² is a correction to the R² which takes into account the number of variables used in the model. The analysis of variance table is used to evaluate the explanatory power of the explanatory variables. Where the constant of the model is not set to a given value, the explanatory power is evaluated by comparing the fit (as regards least squares) of the final model with the fit of the rudimentary model including only a constant equal to the mean of the dependent variable. Where the constant of the model is set, the comparison is made with respect to the model for which the dependent variable is equal to the constant which has been set.
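These two statistics can be computed directly from the observed values, the model predictions, and the number of explanatory variables. The sketch below uses hypothetical unweighted data (wi = 1, so W = n); y_hat stands for the model's predictions and p for the number of explanatory variables.

```python
# Illustrative computation of R^2 and adjusted R^2 (unweighted data).
y = [2.0, 4.0, 6.0, 8.0]         # hypothetical observed responses
y_hat = [2.5, 3.5, 6.5, 7.5]     # hypothetical model predictions
p = 1                            # number of explanatory variables assumed

n = len(y)
y_bar = sum(y) / n
ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual SS
ss_tot = sum((yi - y_bar) ** 2 for yi in y)               # total SS

r2 = 1 - ss_res / ss_tot
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(round(r2, 3), round(r2_adj, 3))  # 0.95 0.925
```

Note that r2_adj is always at most r2, and the gap widens as more explanatory variables are added for a fixed number of observations, which is exactly the penalty described above.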


The predictions and residuals table shows, for each observation, its weight, the value of the qualitative explanatory variable (if there is only one), the observed value of the dependent variable, the model's prediction, the residual, the confidence intervals together with the fitted prediction, and Cook's D if the corresponding options have been activated in the dialog box. Two types of confidence interval are displayed: a confidence interval around the mean (corresponding to the case where the prediction would be made for an infinite number of observations with a set of given values for the explanatory variables) and an interval around the isolated prediction (corresponding to the case of an isolated prediction for the values given for the explanatory variables). The second interval is always greater than the first, the random values being larger.


In this chapter, a detailed overview of response surface methodology was presented, together with its mathematical background. The various methods used within RSM, such as the Box-Behnken design, multiple regression, and the ANOVA model, were described.


