India Gaining Its Independence History Essay

 


Before India gained its independence, the Government of British India maintained semi-autonomous diplomatic relations. It administered territories such as Aden, governed under the Bombay Presidency before becoming the separate Colony of Aden (1937-1963), which consisted of the port city of Aden and its immediate surroundings. After India gained its independence from the United Kingdom in 1947, it joined the Commonwealth of Nations and supported independence movements in other colonies, such as the Indonesian National Revolution. Over time, India has drawn growing attention in international affairs. Its size, population, and location explain much of this prominence, together with its growing economic strength, military capability, and scientific and technical capacity. During the Cold War, India did not align itself with either of the major powers, but it maintained close ties with the Soviet Union and therefore received substantial military support. The end of the Cold War affected India's foreign policy severely, but India continued to lead the developing world and the Non-Aligned Movement. Since then, India has sought to strengthen its diplomatic and economic ties with the United States, the People's Republic of China, the European Union, Japan, Israel, Mexico, and Brazil, and has become an active member of the South Asian Association for Regional Cooperation. Today, India looks forward to a permanent seat on the UN Security Council; it has always been an active member of the UN and has participated in UN peacekeeping operations from the beginning, and its bid is currently backed by several countries including France, Russia, the United Kingdom, Germany, Japan, Brazil, and Australia. In the last three years, India has also provided the US and European nations with significant intelligence for their wars. The many joint military exercises that followed strengthened the US-India and EU-India bilateral relationships, and India's bilateral trade with Europe and the US has more than doubled in the last five years. India is also pursuing nuclear energy and has signed a nuclear co-operation agreement with the US, but this has not persuaded other Nuclear Suppliers Group members to sign similar deals, even though the US argued that India's strong nuclear non-proliferation record makes it an exception worth pursuing. If India achieves these goals, the benefits will be significant: it would be regarded as a potential global power, if not a superpower, and whatever it does and decides would affect the world in one way or another. It could continue to post some of the fastest economic growth rates in the world and build one of the largest economies. The cost of this is also significant; with every good comes a bad. India would gain more enemies and could possibly go to war, and rising at such a fast rate can also turn into a quick descent.


India has been deeply ritual and religious throughout its history. Religion became part of everyday life for Indians, part of their culture. Beginning with the Shramana traditions and the Vedic religion, religion in India dates back thousands of years. India is the birthplace of four of the world's major religious traditions: Hinduism, which grew out of the Vedic religion; Jainism and Buddhism, which emerged from the Shramana traditions; and Sikhism. Hinduism dominates in India with 80.5 percent of the population, followed by Islam at 13.4 percent, Christianity at 2.3 percent, and Sikhism at 1.9 percent. Religion is also one of the reasons people came to India throughout its history, including traders, travelers, immigrants, and even invaders.


Another very religious country in Asia is China. China's religions also reach far back into the past, with Buddhism, Confucianism, and Taoism flourishing during the Tang Dynasty. Besides following a religion, most Chinese also worship their ancestors who have passed away. After communism took over, Christianity began to emerge, but religion in China slowly faded away during the 1950s. In the 1980s, when China opened up and people were given more freedom, Taoism and Buddhism came to be regarded as part of Chinese culture, and Buddhism became the fastest-growing religion in China.


To me, the histories of the two countries are not exactly the same, but I believe that under communism in the 1950s religion did fade away because people were forced not to express what they believed in. That is why religion re-emerged so quickly after China's opening up. India and China are similar in terms of religion because both see religion as part of their life and their culture.


In India, 1.17 billion people live in 3.29 million sq. km. A fairly large number belong to the upper and middle classes, but at the same time there is a large pool of people living below the poverty line, about 70 percent to be more precise. As India keeps growing in terms of GDP, at around 1.21 trillion USD per year, this large lower class affects the country in several ways. The caste system is well known in India's history. From priests (Brahmin) to warriors (Kshatriya), then traders and artisans (Vaishya), and lastly farmers and laborers (Shudra), this way of dividing people is still engraved in society today, even though it is officially illegal. Placement in each category is simple: if you are born a farmer or a laborer, you will always be a Shudra. It is believed that one who is born unclean or polluted will always be polluted. This is obviously a social problem. As of now about 70 percent of the people in India are Shudras, and if the problem is not fixed, that number will only increase rather than decrease. Having such a large lower class, I believe, degrades the country's image. Other countries that see potential in India would view this negatively, and trust in and reliance on India would surely decrease. To solve this social problem, the government has already enacted rules and laws against the caste system, but that is not all it takes. The law enforcers are not doing a good enough job; they are not taking the law seriously enough. On the other hand, it is not only the enforcers that need to improve; it is the people of India too. Everyone needs to set aside the caste system and start to accept others, to open up and see what talents each person has. This may take generations, so it has to start with fathers and mothers teaching their children that there is no such thing as a caste system, and that practice must be passed from one generation to the next. This is the only way to cure this social problem. The faster India can reduce the number of people who believe in the caste system, the faster it will grow.


Dr. B. R. Ambedkar was an Indian boy born into an "untouchable" family in the caste system. His father sent him to an army school so that he would get a good education. From childhood he experienced caste discrimination. He raised himself after the death of his father and did well in school, later graduating from Columbia University. Because of his rough childhood, he wrote countless emotional passages such as this: "This condition obtains even where there is no slavery in the legal sense. It is found where as in caste system, some persons are forced to carry on the prescribed callings which are not their choice." Knowing his background, it is easier to understand his feelings and emotions while writing this. I can feel that he understood far more about life than those around him in India. He speaks with knowledge in the opening of the quote. He understood why there was still a caste system even though he knew that everyone is born equal, and even though there was a law forbidding it. He just wanted people to realize that there is no point to it, and he could prove it with his own life. Being born an untouchable is basically the lowest of the low, but he proved to the other classes that untouchables can be, and are, as capable as anyone else. The reason he said "legal sense" is that he knew the law, he could debate, and on top of that he was a Law Minister himself. In the second part of the quote, he explains that it is unfair to be born an untouchable. To elaborate: being born an untouchable is like being born with a scar on your face. The scar was not there in the womb, but as soon as you come out and realize that you are an untouchable, it is as if you are forced to cut your own face and leave a scar just to remind you where you are from, when it was never your choice. Now, no matter what he achieves in life, every time he looks in the mirror he will always see that scar embedded in his face. He was a brave man for doing what he did; he tried to make a difference for all untouchables in India. Reading about his life made me think of Martin Luther King. I believe these two great men had similar goals in life: to change the way people think about discrimination.


India plays a very big role in the Asian region in terms of trade, culture, and politics. To start, India is famous for trade, most likely because of where the land sits on earth and its wealth of raw materials. Trade in India began around 2500 B.C.E., when the inhabitants of the Indus River valley developed an urban culture based on commerce and sustained by agricultural trade, and it has continued ever since. In the 1990s, the Congress party won 213 parliamentary seats and returned to power at the head of a coalition under P. V. Narasimha Rao, whose government initiated a gradual process of economic liberalization under Finance Minister Manmohan Singh. These reforms opened the Indian economy to global trade and investment. Today, India exports roughly 176.4 billion USD a year, including engineering goods, petroleum products, precious stones, cotton apparel and fabrics, gems and jewelry, handicrafts, and tea. On top of that, India's software exports stand at about 22 billion USD a year. Its major trade partners are China, the U.A.E., the EU, Russia, Japan, and of course the US.


India has a long history, and its culture has changed continually over time. India has been invaded countless times, by peoples from the Iranian plateau, Afghanistan, Arabia, and the West. Indians have taken in many different cultures and languages, absorbing and modifying them to produce a remarkable racial and cultural mixture. Out of a population of 1.17 billion, about 72 percent are Indo-Aryan and 25 percent Dravidian, with roughly 2,000 more ethnic groups making up the remaining 3 percent. The languages India has absorbed or originated include Hindi, English, and 16 other official languages. Culturally, then, India is a vast blend of many different traditions merged through its history of invasion.


In history, India started with the Indus Valley Civilization, followed by the Vedic period, which explains why Hinduism is the dominant religion in India. After the Vedic period, much of India was made up of independent kingdoms and republics. Between the 13th and 16th centuries, the Delhi Sultanate ruled most of northern India, and later came the Mughal Empire. Under the Mughal emperor Akbar the Great, India made economic and cultural progress. Portugal, the Netherlands, France, and Great Britain were the European powers that played a role from around the 16th century; they established trading posts and later took advantage of the Indians. By 1856, most of India was controlled by the British East India Company. India gained its independence on August 15, 1947, and its current form of government is a federal republic. In the government, the executive branch includes the President, who is the chief of state, the Prime Minister, who acts as head of government, and the Council of Ministers.


Because of India's historical significance in trade, culture, and politics, the rest of the Asian region was also influenced. Through Indian trade, neighboring countries such as Pakistan to the west, China, Nepal, and Bhutan to the north, and Bangladesh and Burma to the east also gained trading benefits in various ways. China, for instance, began trading goods with India along the Great Silk Routes, and India sent emissaries and Buddhist missions to China both by sea and by land. Indian culture also spread throughout the Asian region. Practices such as the joint family, in which the whole family lives together in one place and the oldest male is in charge of everyone and makes all the important decisions, traveled across Asia. Thailand has certainly adopted this pattern, as the eldest in the family takes care of the young and makes the important decisions. Another historical influence of India on Asia is political. Even though most if not all Southeast Asian countries reject the idea that they were politically influenced or colonized by a "Greater India", rulers of Southeast Asian countries used Sanskrit names, which indicates that they adopted these influences themselves. In my view, India has influenced the Asian region significantly over time. Some of the influenced practices may have merged with local traditions, but I believe much of it started with India's influence.




Politics And Economy Of Nineteenth Century Latin American History Essay


Latin America, as it is called today, was originally home to great civilizations such as the Aztec, Maya, and Inca. By the end of the sixteenth century, however, these civilizations had been wiped out, and most of Latin America was colonized by Europeans, particularly the Spanish and Portuguese, speakers of Latin-derived languages. A long time passed before the region saw light again. Inspired by the American and French Revolutions, and aided by the weakening of Spain and Portugal, Latin American nations began independence movements in the nineteenth century. Starting with Haiti in 1804, most nations had gained independence by 1825. This should have brought immense hope to the people of Latin America. Instead, it brought despair, and the consequences of independence were severe. The economy and politics were tremendously unstable and became even worse than during the colonial period. Serious economic setbacks occurred, and foreign intervention increased as outsiders looked to take advantage of the troubled region. Dictators emerged from the political instability, and civil wars for control of power broke out.

Independence brought free trade and access to the international capital market, which should have been key factors in advancing Latin America. However, a lack of experience in international trade and the weakening of Latin American economic institutions from prolonged wars of independence hindered the region's potential growth. Furthermore, the region lost its main trading partners, its former rulers Spain and Portugal, who had provided much of its export income. The Spanish and Portuguese had also directed and protected the economy of Latin America, and there was no legitimate authority within the continent to replace them. Trade among the newly independent nations decreased as well because of tariffs imposed on each other's imports. The Latin American nations had no choice but to ask for help from foreign nations, specifically Great Britain and the United States, because foreign investment and the sale of exports were all the Latin American nations could rely on for national income. The foreign powers gladly accepted, as they wanted to establish Latin America as a new market in which to sell their products. However, the Latin Americans had more to lose than to gain from this trade. Despite tariffs imposed on finished products from Great Britain and the United States, these imports were far cheaper than domestic products, because the costs of producing domestic finished goods were higher and the Latin American nations lacked efficiency. Furthermore, the products Latin America exported were mainly raw materials, and each nation had only one or two of these to export. What is even more troubling is that producing these raw materials was itself difficult due to a lack of skills: silver production decreased by fifty percent in Bolivia and seventy-five percent in Mexico compared with production before independence. Foreign investment was also of little help, since there was a limit to how much the United States and Great Britain could give to the numerous nations of Latin America.

The political situation in Latin America was far worse than the economic situation. Except for a few nations such as Chile and Uruguay, no nation had a stable regime. This was mainly because the nations were new. They were only beginning to gain their identities as nations, with new names, flags, and national anthems. Furthermore, there were no established borders between these countries, and there were ongoing battles between nations to gain more land. The political parties of most nations were divided into the conservatives, who wanted to preserve traditional social hierarchies to guarantee national stability, and the liberals, who wanted economic reform and individual initiative to develop their nations. These parties struggled against each other for power and control, causing civil wars in some nations. Due to these conflicts, some states, such as Gran Colombia and the Federal Republic of Central America, collapsed and divided into several different nations.

Political-military dictators known as caudillos emerged as a result of the economic and political crisis. These caudillos had been top-ranking officers of the armies that came into existence during the independence wars, and they were understandably deemed heroes by their people for their feats in gaining independence. However, they wanted compensation for this deed and did not disband their armies, using them instead to influence the course of political development, since the armies were more stable and organized than other institutions. Eventually, the leaders of these armies rose to the highest status within their nations. This turned out to be catastrophic, as they did not have enough knowledge of how to run a nation and did not care about the lives of their people; all they wanted was power and wealth. Thanks to these 'great' leaders, the first decades of the newly formed Latin American nations were marred by militarism, and the nations experienced great setbacks despite gaining the freedom they had yearned for so long. Even today, most of these nations have trouble overcoming problems that have existed for so long. Not a single one of them is considered developed, and their future still remains cloudy.




Study Of Multimedia Data Compression Methods Engineering Essay

In this report there are two experiments, 'Audio Compression Using Down-Sampling and Quantization' and 'Image Compression Using JPEG'. The aim of the experiments is to understand how audio compression and image compression are performed using different methods and which methods are more effective. The major findings are that, for audio compression, the sampling rate and quantization resolution determine the quality of the compressed audio, while for image compression, the DCT and quantization determine the compression ratio and image quality.

Keywords

INTRODUCTION

AUDIO COMPRESSION USING DOWN-SAMPLING AND QUANTIZATION

2.1 Experiment 1: Effect of sampling rate and quantization resolution on sound quality

Multimedia data has high redundancy, which results in very large file sizes. An audio recording is composed of a set of discrete sound samples, and each sound sample is quantized and represented by a binary code. In this experiment, the principles of sampling a continuous-time signal, increasing or decreasing the sampling rate of a discrete-time signal, and changing the quantization resolution are explored. Speech and music are recorded at various sampling rates and bits per sample, and sampling rate conversion and quantization are applied to the digital signals. The sound quality obtained with different filters and quantizers is compared and observed.
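As a minimal MATLAB sketch of the two operations studied here (down-sampling and requantization), assuming a mono recording 'myinput.wav' exists in the working directory (the file name is only an example, and resample requires the Signal Processing Toolbox):

[x, fs] = wavread('myinput.wav');    % samples normalized to [-1, 1] and the sampling rate
x_alias = x(1:4:end);                % down-sample by 4 without an anti-alias filter
x_filt  = resample(x, 1, 4);         % down-sample by 4 with resample's built-in low-pass filter
x_8bit  = round(x * 127) / 127;      % requantize to roughly 8 bits per sample
soundsc(x_filt, fs/4);               % listen to one version at a time and compare the quality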

2.1.1 Experiment Procedure

Part 1) Investigate the effect of sampling rate and bits per sample on sound quality

1) Record audio at different sampling frequencies and bit depths.

a. Windows Media Player on the computer is started.

b. The recording control is configured to change the source of sound input.

The volume control panel is opened (double-click the "speaker" icon in the system tray). Select "Options -> Properties -> Recording", then enable all three items "CD Audio, Microphone, Stereo Mix" in the recording control panel, as shown in Fig. 1, and click "OK". In the recording panel below (Fig. 2), "Stereo Mix" is selected so that the internal sound of the computer can be recorded.

The sound recorder is opened (Accessories -> Entertainment -> Sound Recorder). The recorder properties are adjusted to set the sampling rate to 11.025 kHz, 8 bits/sample, by selecting "File -> Properties -> Convert Now" (Fig. 4). "PCM" is chosen for "Format", "11.025 kHz, 8 bits, mono" is chosen for "Attributes", and the file is saved as type ".wav".

The "record" button is pushed and 30 seconds of the played audio are recorded. The recording is saved to a file in the ".wav" format.

The same audio segment is recorded using different sampling rates and bits/sample, and the sound quality is compared. Audio quality can be subjectively evaluated as follows:

No | Perceptual quality | Comment
1  | Poor               | The sound is corrupted by excessive noise and is no longer understandable
2  | Acceptable         | The sound is corrupted by a moderate noise level
3  | Good               | The sound quality is perceived to be similar to the original sound

The procedure above is repeated for the following recording parameters, and the perceptual quality is noted.

Sampling rate = 11.025 kHz
Music  | 8 bits/sample  | Acceptable
Music  | 16 bits/sample | Good
Speech | 8 bits/sample  | Acceptable
Speech | 16 bits/sample | Good

Sampling rate = 44.1 kHz
Music  | 8 bits/sample  | Acceptable
Music  | 16 bits/sample | Good
Speech | 8 bits/sample  | Acceptable
Speech | 16 bits/sample | Good

Here we can see that the more bits per sample, the better the quality obtained, and the higher the sampling rate, the better the quality of the sampled audio.

The two programs below are applied to a sound file recorded earlier at a sampling frequency of 44.1 kHz with 16 bits/sample.

MATLAB is started and the directory is changed to your own working directory. All the program files and sample audio files are copied to that directory.

The "down4up4_nofilt" program is run on a sound file recorded at a sampling frequency of 44.1 kHz. For example, type down4up4_nofilt('myinput.wav','myoutput.wav') in the MATLAB command window.

The program computes the mean square error between the original and the interpolated signal. The original sound and the signal after down-sampling and up-sampling are compared in terms of perceptual sound quality, waveform, frequency spectrum, and mean square error (a simple sketch of this error measure is shown below). The results are tabulated and analyzed.
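A minimal sketch of how such a mean square error can be computed by hand, assuming the original and processed recordings are saved as 'myinput.wav' and 'myoutput.wav' (the file names are only examples; the internals of the down4up4 programs themselves are not reproduced here):

[x, fs]  = wavread('myinput.wav');      % original recording
[y, fs2] = wavread('myoutput.wav');     % recording after down-sampling and up-sampling
n   = min(length(x), length(y));        % guard against a small length mismatch
mse = mean((x(1:n) - y(1:n)).^2);       % mean square error between the two signals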

The steps from part b are repeated using "down4up4_filt" instead. The results are tabulated and analyzed.

[Figures: waveforms of the music signal after down-sampling/up-sampling without the filter and with the filter.]

MUSIC                         | Down-sampling/Up-sampling (no filter) | Down-sampling/Up-sampling (with filter)
Mean square error             | 0.000638157                           | 0.000130789
Perceptual quality            | Acceptable                            | Good
Comment on waveforms          | Discrete; the levels are very high    | Discrete; the levels are lower
Comment on frequency spectrum | The spectrum oscillates in a decreasing manner and rises up at the end | The spectrum oscillates in a decreasing manner

With the filter, the quality is better because the filter reduces the aliasing noise, thus reducing the mean square error.

Two MATLAB programs are given, quant_uniform.m and quant_mulaw.m. The "quant_uniform" program quantizes an input sequence using a uniform quantizer with a user-specified number of levels, and the "quant_mulaw" program quantizes it using a mu-law quantizer.

The "quant_uniform" function is applied to a sound file recorded with 16 bits/sample in order to quantize it to a lower number of quantization levels. For example, to use 256 quantization levels (8 bits per sample), set N = 256 and type quant_uniform('myinput.wav','myoutput.wav',256) in the MATLAB command window.

The original sequence and the quantized sequences at different quantization levels are compared in terms of perceptual sound quality and waveform. The quantization error (the mean square error between the original and quantized samples) is recorded and compared, and the minimum N that still gives good sound quality is determined. A rough sketch of such a uniform quantizer is shown below.
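The listing of quant_uniform.m is not reproduced in this report; the following is only a rough sketch of what an N-level mid-rise uniform quantizer could look like for samples normalized to [-1, 1] (the file names are examples):

N    = 256;                                     % number of quantization levels (8 bits per sample)
[x, fs] = wavread('myinput.wav');               % samples normalized to [-1, 1]
step = 2 / N;                                   % width of one quantization interval
xq   = step * (floor(x / step) + 0.5);          % mid-rise uniform quantization
xq   = max(min(xq, 1 - step/2), -1 + step/2);   % keep the top and bottom codes inside [-1, 1]
mse  = mean((x - xq).^2);                       % quantization error
wavwrite(xq, fs, 'myoutput.wav');               % save the quantized version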

The "quant_mulaw" function is applied to a sound file recorded with 16 bits/sample, with mu = 16.

The experiment is repeated for both music and speech audio data with different quantization levels N, the minimum N needed to obtain acceptable sound quality is determined, and the results are tabulated.

N is the number of quantization levels; N = 2^b, where b is the number of bits used per sample.

N (levels)       | Uniform quantization: mean square error | Perceptual quality (poor/acceptable/good) | Mu-law quantization (mu = 16): mean square error
N = 2^12 = 4096  | 1.17327 x 10^-8                         | Good                                      | 1.36375 x 10^-9
N = 2^10 = 1024  | 1.8777 x 10^-7                          | Good                                      | 2.30712 x 10^-8
N = 2^8 = 256    | 2.95153 x 10^-6                         | Acceptable                                | 3.66273 x 10^-7
N = 2^6 = 64     | 5.27229 x 10^-5                         | Poor                                      | 5.52074 x 10^-6
N = 2^4 = 16     | 0.00112348                              | Poor                                      | 9.02207 x 10^-5

Uniform quantization (minimum acceptable): N = 2^8 = 256
Mu-law quantization (minimum acceptable): N = 2^6 = 64

N is the number of quantization levels; N = 2^b, where b is the number of bits used per sample.

N (levels)       | Uniform quantization: mean square error | Perceptual quality (poor/acceptable/good) | Mu-law quantization (mu = 16): mean square error
N = 2^12 = 4096  | 1.15236 x 10^-8                         | Good                                      | 3.15122 x 10^-9
N = 2^10 = 1024  | 1.84788 x 10^-7                         | Acceptable                                | 5.02324 x 10^-8
N = 2^8 = 256    | 2.92896 x 10^-6                         | Poor                                      | 8.06364 x 10^-7
N = 2^6 = 64     | 4.744 x 10^-5                           | Poor                                      | 1.28916 x 10^-5
N = 2^4 = 16     | 0.000769637                             | Poor                                      | 0.000207797

Uniform quantization (minimum acceptable): N = 2^10 = 1024
Mu-law quantization (minimum acceptable): N = 2^8 = 256

[Figures: waveform plots of the quantized signals at N = 16, 64, 256, 1024, and 4096 for uniform and mu-law quantization of the music and speech recordings.]

Here we can see that the higher the quantization level N, the better the sound quality and the smaller the error.

Mu-law quantization is also better than uniform quantization, because it spaces the quantization levels more closely at low signal amplitudes, where quantization noise is most audible to our ears.
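For reference, the standard mu-law companding curve that such a quantizer applies before uniform quantization (with mu = 16 here; the exact implementation inside quant_mulaw.m is not shown in this report) is

\[
F(x) = \operatorname{sgn}(x)\,\frac{\ln\left(1 + \mu\,|x|\right)}{\ln\left(1 + \mu\right)}, \qquad -1 \le x \le 1 .
\]

Because F compresses large amplitudes and stretches small ones, more of the available levels are spent on quiet samples, which matches the lower mean square errors observed in the tables above.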

In conclusion, as seen in the experiments above, a higher sampling rate, more bits per sample, the use of a filter, and mu-law quantization all give better sound quality.

Investigate the effectiveness of the JPEG scheme for compressing photographic images

Represent the compressed image in the frequency domain using the DCT transform

Investigate the importance of different DCT coefficients

Investigate the trade-off in the selection and quantization of DCT coefficients and its effect on compression ratio and image quality

The Discrete Cosine Transform (DCT) is at the heart of the most popular lossy image compression standard on the internet, the JPEG standard. This experiment examines how to transform an image into a series of 8 x 8 blocks of DCT coefficients, how to quantize the DCT coefficients, and then how to reconstruct the image from the quantized coefficients. The original image and the decompressed image can then be compared.

In the Image Processing Toolbox, the two-dimensional discrete cosine transform (DCT) of an image is computed with the dct2 function. The DCT has the property that, for a typical image, most of the visually significant information is concentrated in just a few DCT coefficients, so the DCT is often used in image compression applications.
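A minimal sketch of this energy-compaction property, assuming the Image Processing Toolbox and its bundled 256 x 256 test image 'cameraman.tif' are available:

A = double(imread('cameraman.tif')) / 255;   % load and normalize the test image
B = dct2(A);                                 % 2-D DCT of the whole image
lowE = sum(sum(B(1:32, 1:32).^2));           % energy in the lowest-frequency 1/64 of the coefficients
fprintf('low-frequency energy fraction = %.4f\n', lowE / sum(B(:).^2));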

The Image Processing Toolbox offers two different ways to compute the DCT. The first is the dct2 function, which uses an FFT-based algorithm for quick computation with large inputs. For small square inputs, such as 8-by-8 or 16-by-16 blocks, it may be more efficient to use the DCT transform matrix, which is returned by the function dctmtx. The M-by-M transform matrix T is given by:
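This is the standard DCT-II matrix definition used by dctmtx, with the row index p and column index q running from 0 to M-1:

\[
T_{pq} =
\begin{cases}
\dfrac{1}{\sqrt{M}}, & p = 0,\; 0 \le q \le M-1,\\[1.5ex]
\sqrt{\dfrac{2}{M}}\,\cos\dfrac{\pi\,(2q+1)\,p}{2M}, & 1 \le p \le M-1,\; 0 \le q \le M-1 .
\end{cases}
\]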

T*A is then an M-by-M matrix whose columns contain the one-dimensional DCT of the columns of A. The two-dimensional DCT of A can be computed as B = T*A*T', where T' is the transpose of T. Since T is a real orthonormal matrix, its inverse is the same as its transpose; thus, T'*B*T is the inverse two-dimensional DCT of B.

In the JPEG image compression algorithm, the input image is divided into 8-by-8 or 16-by-16 blocks, and the two-dimensional DCT is computed for each block. The DCT coefficients are then quantized, coded, and transmitted. The JPEG receiver decodes the quantized DCT coefficients, computes the inverse two-dimensional DCT of each block, and puts the blocks back together into a single image. For typical images, many of the DCT coefficients have values close to zero, and these coefficients can be discarded without noticeably affecting the quality of the reconstructed image.

T = dctmtx(8);                                % 8x8 DCT transform matrix
Mask = [ 1 1 1 1 0 0 0 0                      % keep only the 10 lowest-frequency coefficients
         1 1 1 0 0 0 0 0                      % of each 8x8 block; the rest are set to zero
         1 1 0 0 0 0 0 0
         1 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0
         0 0 0 0 0 0 0 0 ];
A = imread('cameraman.tif');
A = double(A)/255;                            % scale pixel values to [0, 1]
B = blkproc(A, [8 8], 'P1*x*P2', T, T');      % forward DCT of each 8x8 block
C = blkproc(B, [8 8], 'P1.*x', Mask);         % discard the masked high-frequency coefficients
D = blkproc(C, [8 8], 'P1*x*P2', T', T);      % inverse DCT of each block
imshow(A), figure, imshow(D);                 % display the original and reconstructed images

Observe and understand the mini program above.

The grayscale image 'cameraman.tif' is compressed and decompressed according to the code above.

The difference between the original image and the reconstructed image is observed. Are any differences noticeable?

The quality of the reconstructed image is observed with several different quantization matrices. What happens if all the elements in the quantization matrix are set to 1?

Can all of the DCT coefficients except the DC value be removed without affecting the quality of the reconstructed image too much? Discuss your answer.

The above experiments are repeated using the 'rice.tif' image.

compression and decompression of the grayscale image ‘cameraman.tif’

[Figure: original image and reconstructed image of 'cameraman.tif'.]

After compression, the reconstructed image does not preserve the quality of the original image; it looks blurrier. This is caused by the coefficients that are set to '0' by the quantization matrix during compression.

Matlab Code : image compression and decompression by using 8x8 DCT matrix.

T = dctmtx(8);

Mask=[ 1 1 1 1 0 0 0 0

1 1 1 0 0 0 0 0

1 1 0 0 0 0 0 0

1 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 ];

A = imread('cameraman.tif');
A = double(A)/255;
B = blkproc(A, [8 8], 'P1*x*P2', T, T');
C = blkproc(B, [8 8], 'P1.*x', Mask);
D = blkproc(C, [8 8], 'P1*x*P2', T', T);

If all the elements in the quantization matrix are set to 1:

[Figure: original image and reconstructed image of 'cameraman.tif' with all quantization matrix elements set to 1; the two images are identical.]

The reconstructed image and the original image are the same. This is because all the elements in the quantization matrix are set to '1', so no coefficients are discarded and no compression is done.

Matlab Code : image compression and decompression by using 8x8 DCT matrix.

T = dctmtx(8);

Mask=[ 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1];

A = imread('cameraman.tif');

A = double(A);

B = blkproc(A, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D = blkproc(C, [8 8], 'P1*x*P2', T', T);

imshow(uint8(A)), figure, imshow(uint8(D));

If all of the DCT coefficients except the DC value are removed:

[Figure: original image and reconstructed image of 'cameraman.tif' with only the DC coefficient kept.]

We cannot remove all the DCT coefficients except the DC value without affecting the quality of the reconstructed image. When those coefficients are removed, the high-frequency content is removed as well, so information is lost. With only the DC value kept, the compression ratio is very high but the quality drops sharply, and a very blurred image with a strong blocking effect is reconstructed.

Matlab Code : image compression and decompression by using 8x8 DCT matrix.

T = dctmtx(8);

Mask=[ 1 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 ];

A = imread('cameraman.tif');

A = double(A);

B = blkproc(A, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D = blkproc(C, [8 8], 'P1*x*P2', T', T);

imshow(uint8(A)), figure, imshow(uint8(D));

compression and decompression of the grayscale image ‘rice.tif’

[Figure: original image and reconstructed image of 'rice.tif'.]

The reconstructed image is blurred and the quality is reduced. This is the same result as for 'cameraman.tif'.

Matlab Code : image compression and decompression by using 8x8 DCT matrix.

T = dctmtx(8);

Mask=[ 1 1 1 1 0 0 0 0

1 1 1 0 0 0 0 0

1 1 0 0 0 0 0 0

1 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 ];

A = imread('rice.tif');

A = double(A);

B = blkproc(A, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D = blkproc(C, [8 8], 'P1*x*P2', T', T);

imshow(uint8(A)), figure, imshow(uint8(D));

If all the elements in the quantization matrix are set to 1:

[Figure: original image and reconstructed image of 'rice.tif' with all quantization matrix elements set to 1; the two images are identical.]

The reconstructed image and the original image are the same, just as for 'cameraman.tif'.

Matlab Code : image compression and decompression by using 8x8 DCT matrix.

T = dctmtx(8);

Mask=[ 1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1

1 1 1 1 1 1 1 1];

A = imread('rice.tif');

A = double(A);

B = blkproc(A, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D = blkproc(C, [8 8], 'P1*x*P2', T', T);

imshow(uint8(A)), figure, imshow(uint8(D));

If all of the DCT coefficients except the DC value are removed:

[Figure: original image and reconstructed image of 'rice.tif' with only the DC coefficient kept.]

The reconstructed image is very blurred, just as for 'cameraman.tif'.

Matlab Code : image compression and decompression by using 8x8 DCT matrix.

T = dctmtx(8);

Mask=[ 1 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 ];

A = imread('rice.tif');

A = double(A);

B = blkproc(A, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D = blkproc(C, [8 8], 'P1*x*P2', T', T);

imshow(uint8(A)), figure, imshow(uint8(D));

The color image 'peppers.png' is loaded into MATLAB. For color image compression, the image is first converted to the YCbCr color format, followed by sub-sampling of the chrominance (Cb and Cr) channels. The functions rgb2ycbcr and ycbcr2rgb are used to convert to the YCbCr color space and back, the function imresize is used to sub-sample the chrominance channels, and the same mask as in the code above is used for both the luminance and chrominance channels.

Without any sub-sampling, the DCT and quantization are performed on all the channels, the image is reconstructed, and its quality is observed.

Using 4:2:0 chroma sub-sampling, the DCT and quantization are performed on every channel, the image is reconstructed, and its quality is observed.

Are there any significant differences between the two reconstructed images above? Discuss your answer.

A simple function is written that takes an image and a mask as input, performs compression and decompression on the image (color image compression if the input image is color), and displays the original and reconstructed images side by side. The program should also be able to compute the SNR of the reconstructed image.

1)DCT and quantization on all the channels without any sub-sampling.

[Figures: original 'peppers.png' image, the reconstruction compressed in RGB format, and the reconstruction compressed in YCbCr format.]

Matlab Code : image compression and decompression

T = dctmtx(8);

Mask=[ 1 1 1 1 0 0 0 0

1 1 1 0 0 0 0 0

1 1 0 0 0 0 0 0

1 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 ];

A = imread('peppers.png');

YCbCr = rgb2ycbcr(A);

YCbCr = double(YCbCr);

Y = YCbCr(:,:,1);

Cb = YCbCr(:,:,2);

Cr = YCbCr(:,:,3);

B = blkproc(Y, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D(:,:,1) = blkproc(C, [8 8], 'P1*x*P2', T', T);

B = blkproc(Cb, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D(:,:,2) = blkproc(C, [8 8], 'P1*x*P2', T', T);

B = blkproc(Cr, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D(:,:,3) = blkproc(C, [8 8], 'P1*x*P2', T', T);

RBG = ycbcr2rgb(uint8(D));

figure, imshow(uint8(D)), figure, imshow(RBG);

2)DCT and quantization on every channel using the 4:2:0 chroma sub-sampling.

[Figures: the reconstruction with 4:2:0 chroma sub-sampling, compressed in RGB format and in YCbCr format.]

Matlab Code : image compression and decompression

T = dctmtx(8);

Mask=[ 1 1 1 1 0 0 0 0

1 1 1 0 0 0 0 0

1 1 0 0 0 0 0 0

1 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0

0 0 0 0 0 0 0 0 ];

A = imread('peppers.png');

YCbCr = rgb2ycbcr(A);

YCbCr = double(YCbCr);

Y = YCbCr(:,:,1);

Cb = YCbCr(:,:,2);

Cr = YCbCr(:,:,3);

Cb = imresize(Cb,0.5);

Cr = imresize(Cr,0.5);

B = blkproc(Y, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D(:,:,1) = blkproc(C, [8 8], 'P1*x*P2', T', T);

B = blkproc(Cb, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D(:,:,2) =imresize(blkproc(C, [8 8], 'P1*x*P2', T', T),2);

B = blkproc(Cr, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',Mask);

D(:,:,3) =imresize(blkproc(C, [8 8], 'P1*x*P2', T', T),2);

RBG = ycbcr2rgb(uint8(D));

figure, imshow(uint8(D)), figure, imshow(RBG);

Comparing the DCT-and-quantization results in RGB and YCbCr without sub-sampling against those with 4:2:0 chroma sub-sampling, there is no visible difference between them. This is because the human eye is less sensitive to the Cb and Cr channels than to the Y (luminance) channel, so even when Cb and Cr are reduced, there is no visible difference the human eye can detect.

3) A simple function that takes an image and a mask as input and performs compression and decompression on the image:

function [disp,snr]=codec_snr(file_name,mask)
% CODEC_SNR  Block-DCT compression/decompression demo: applies the 8x8 DCT and the
% given coefficient mask to the image in file_name (with 4:2:0 chroma sub-sampling
% for color images), shows the original and reconstructed images side by side, and
% returns the reconstructed image and its SNR in dB.

clear D

A = imread(file_name);

T = dctmtx(8);

if isgray(A)

A = double(A);

B = blkproc(A, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',mask);

D = blkproc(C, [8 8], 'P1*x*P2', T', T);

disp = uint8(D);

elseif isrgb(A)

YCbCr = rgb2ycbcr(A);

YCbCr = double(YCbCr);

Y = YCbCr(:,:,1);

Cb = imresize(YCbCr(:,:,2),0.5);

Cr = imresize(YCbCr(:,:,3),0.5);

B = blkproc(Y, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',mask);

D(:,:,1) = blkproc(C, [8 8], 'P1*x*P2', T', T);

B = blkproc(Cb, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',mask);

D(:,:,2) =imresize(blkproc(C, [8 8], 'P1*x*P2', T', T),2);

B = blkproc(Cr, [8 8], 'P1*x*P2', T, T');

C = blkproc(B,[8 8], 'P1.*x',mask);

D(:,:,3) =imresize(blkproc(C, [8 8], 'P1*x*P2', T', T),2);

disp = ycbcr2rgb(uint8(D));

end

A = imread(file_name);

sx = double(A).^2;

sd = (double(A)-double(disp)).^2;

snr = 10*log10(mean(sx(:))/mean(sd(:)));

figure('Position',[8 8 1000 500]),subplot(1,2,1),imshow(A);

text(0,0,'Before','HorizontalAlignment','center', 'BackgroundColor',[1 1 1]);

subplot(1,2,2), imshow(disp);

text(0,0,'After','HorizontalAlignment','center', 'BackgroundColor',[1 1 1]);

Discussion:

1. In this experiment, we investigate how image compression works using the DCT, quantization, and sub-sampling, and how each of these affects the quality of the images.

2. JPEG is an effective scheme for compressing photographic images because it discards image information while maintaining perceived image quality. It removes information to which the human eye is less sensitive, thus reducing the size of the image file; with the right quantization scheme, image quality is preserved.

3. Compression efficiency = (file size of compressed image) / (file size of original image); the smaller this value, the stronger the compression. A small numeric sketch of this measure follows the numbered points below.

4. Storing the compressed image as quantized DCT coefficients is a suitable approach. The human eye is sensitive to differences in brightness over large areas, but less sensitive to the strength of high-frequency brightness variations. The DCT and quantization are therefore used to reduce the higher-frequency components while avoiding a visible loss of image quality.

5. The low frequencies are the most important, followed by the medium and then the high frequencies. Because the low-frequency coefficients are perceptually important, they require fine quantization.

6. Reducing the number of DCT coefficients increases the compression ratio, but the image quality suffers if the compression ratio becomes very high. The more elements of the quantization matrix that force high-frequency components to zero, the higher the compression ratio. Because of this heavy reduction of DCT coefficients, some information is lost, which causes blocking artifacts and reduces quality.

8. In this experiment, we learned that the DCT and quantization can be used for image compression. Different quantization matrices affect the compression differently, and sub-sampling can increase the compression ratio. Certain information can be discarded during compression because our eyes are sensitive only to certain information, and some frequencies are more important than others and require finer quantization.
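As a small illustration of the compression-efficiency measure from point 3, assuming hypothetical file names for an original image and a version saved with JPEG compression (dir is used only to read the file sizes in bytes):

orig = dir('cameraman_original.png');        % original image file on disk (example name)
comp = dir('cameraman_compressed.jpg');      % JPEG-compressed version (example name)
efficiency = comp.bytes / orig.bytes;        % compressed size / original size; smaller means stronger compression
ratio      = orig.bytes / comp.bytes;        % the conventional compression ratio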

In the audio experiment, we observed how a sound sample can be recorded and processed while maintaining its quality, and we came to understand how each method works and some of the advantages of one over another. The recording requirements for good speech quality are lower than those for music.

In the image compression experiment, we came to understand how the different compression methods work and how they are applied.




Positive Aspects Of Economic Development History Essay

 


Nowadays, economic development brings many good things to a country: business demand increases, salaries rise, and people's livelihoods are better guaranteed. However, some developed countries such as Russia, Germany, France, the USA, and the UK still have serious problems with terrorism, racism, pillage, conflict, and destruction of property alongside their economic development, and these problems are still increasing. Fascism was once among the most powerful forces in the world; although it failed in the past, its followers are still trying to rebuild it and create a new fascism in order to control the world again. They gather young people into gangs that cause many riots, and Neo-Nazism is this new fascism.


Neo-Nazism first appeared around 1968-1969. It appears not only in Britain, but also in Poland, the Czech Republic, Hungary, Croatia, Slovenia, and Bulgaria. In the US today, Neo-Nazis are mostly young people who behave rudely, dwell on bad intentions, and prefer to destroy property. They usually gather together in their free time; some are still in school and live with their families, while others have jobs and live alone or with other members. Neo-Nazis typically appear with shaved heads and tattoos representing Hitler, and their targets are communism, liberalism, and the groups long attacked by fascism in America, such as black people, Asian people, the homeless, and gay couples. Neo-Nazism now appears almost all over the world. Many people think of the new Neo-Nazi movement as a criminal organization with extreme views whose power has grown much greater than before, yet in some special cases, as they get older, some members become good people like everyone else.


On the other hand, some developed countries have laws to prevent fascism, racism, and the growth of groups that follow Neo-Nazism. But stopping them seems almost impossible, because these groups often attract people who blame society for their failures, or people with distorted ideas about their country, its culture, or immigrants with different cultures. According to Wikipedia's article on Neo-Nazism, Neo-Nazi movements are defined by loyalty to Adolf Hitler. This shows that people who have no faith or hope in their lives feel they find themselves when they join the Neo-Nazis and can do anything they want without fearing the consequences. They believe political power is the ultimate power, so they try to destroy national solidarity and culture in every country, make people believe in them, and use that belief to increase their power in order to control the world. Worse yet, the thinking of many Neo-Nazi groups is drawing ever closer together, and they are now waiting for the right time to carry out their plans.


According to the Vietbao newspaper, during the 2006 World Cup Neo-Nazis repeatedly attacked foreigners, such as the 10 people hurt in Germany and the 3 people from Mozambique and Cuba attacked in Wismar. They not only attack innocent people but also destroy property, targeting buildings and museums visited by many tourists, and they stage uprisings to oppose the government. They are now growing strongly in Russia and America: there are about 70,000 members operating in some 300 gangs across 85 cities. Most of them are troubled people, such as high school students with unstable psychology, people with no jobs, and people who like violence. They often beat and kill foreigners, especially people from Asian countries. In particular, many foreign students who come to study in Russia have had problems with Neo-Nazis; some have died or been injured in riots. For example, two Vietnamese students were killed by Neo-Nazis, one in 2004 and one in 2009. After that, Russia passed laws to stop these actions and many Neo-Nazis have been punished strictly, but they seem to have no fear; they have become more aggressive and more brutal, and they kill more people every year.


Many say that the problems above arise from a lack of discipline, from elders in a family setting bad examples that others follow, or from different religions and cultures producing such different ways of thinking that people cannot tolerate one another. In my opinion, to fight the new fascism more effectively, we need to promote education in morals, character, history, and culture both at school and in the family, because young people's minds are still immature and it is easy to guide them onto the right path. Moreover, a country needs strict laws to control, prevent, and promptly punish terrorism, racism, pillage, conflict, and destruction of property. And in their hearts, the people of a country have to be close-knit and trust one another. In this way, bad and wrong thinking, as well as fascism, cannot take hold, the country will be at peace, and the economy will develop more and more.


The movie "American History X" is also an example of racism and fascism. The film is about Derek and Danny, two brothers who lived in Los Angeles, California. From childhood, Derek and Danny absorbed racially discriminatory thinking from their family and surroundings, so they deeply hated black people. After their father was killed by a group of black criminals, Derek joined a Neo-Nazi organization and became a leader in it. Influenced by Cameron, the top leader of the organization, he hated not only black people but also all immigrants from Asia and Latin America, Jews, and Muslims. When his brother told him that a group of black men was trying to steal his car, Derek took his gun and shot two of them, and he was sentenced to three years in prison.


During his time in prison, still holding his racist views, Derek joined the Aryan Brotherhood, a gang of white inmates who thought the way he did. But after he discovered the gang's bad behavior, he decided to leave them. That was not easy for him: he was beaten and raped by them. After this, he recognized the true nature of the Neo-Nazi organization and realized that everything he had done before was wrong. He also came to value the honest friendship of the black inmate who worked with him and always shared his feelings, and he came to understand the injustices and difficulties that black people have suffered living in America.


After leaving prison, Derek had changed his thinking about racial discrimination, so he tried to convince Danny to leave the Neo-Nazi organization by telling him what had happened to him. Danny agreed. But, unfortunately, Danny was killed at his school by a black boy he had fought with earlier.


The movie shows that racism is a problem that is not easy to resolve or prevent, because racially discriminatory thinking grows out of parents' teaching, the living environment, and society, and it strongly shapes a person's way of life. Derek and Danny, for example, had been taught racial discrimination by their family and surroundings since childhood, so they always lived in anger and violence: Derek killed two men who tried to steal his car and joined a Neo-Nazi organization whose members went around the city destroying stores owned by Asian and black people, and Danny, heavily influenced by Derek, also joined the organization and liked to fight with black people. They always believed that their actions were right and that black and Asian people were garbage, so they often fought. But their way of thinking was wrong, and they paid a high price for their actions: Danny was killed by a black boy, and Derek was beaten and raped by the Aryan Brotherhood.


The gangs in the movie exist to fight and intimidate people of color, even to the point of killing them without remorse. The Aryan Brotherhood, likewise, beats and abuses any man who does not listen to them and carry out their demands. There is no real difference between the gangs inside and outside of prison, because they share the same attitude: they see people of color as garbage, and they always use violence to resolve their issues.


After watching the movie, I did not feel good. I feel sorry for Danny: he was so young, and his nature was not bad; he joined the Neo-Nazi organization because he was strongly influenced by the teaching of his family and of Derek. If his parents had held the right views and taught the brothers the right way to see human nature, I think they would have become good and useful people. Besides, the destruction and fighting of the gangs in and out of prison were wicked and unprincipled, and they should be punished under the law. The movie teaches me that we should love and respect people regardless of whether they are people of color, black or white, because only love and mutual respect can help us live in peace.


On the other hand, I disagree with the idea that white people should have access to more power and privilege than people of color, because all people are on the same level and have the same human rights, so we should respect others just as we protect our own interests. If we do not respect people with different skin, if we think they have no human rights and are just garbage, then they will try to fight for themselves by gathering people who think the same way in order to protect their position, and of course this will create racism and war.


Many people hold racist views, and racism happens not only in America but in every country, including Russia, Japan, Australia, and Italy. How serious it is depends on the conscience of each person and the way people treat one another. In my opinion, we cannot end racism in America, because it is a big country with many people from different countries who naturally think differently depending on their homeland, culture, religion, and family. The only thing we can do is reduce racism by showing white and black people all the good things that come when they live in peace with everyone. If they can do that, the world will keep changing and everything will get better and better.


Racism has existed for many years, from the past until now, not only in America but around the world. Neo-Nazis are groups of young people who gather into gangs to destroy property and human life, and they usually attract people who blame the unfairness of life or society for their troubles. In sum, if we want to live in a peaceful world, we should start now: treat everyone fairly, work against prejudiced thinking about different skin colors, religions, and cultures, help bad people find their way back, and be honest and respect everyone's rights. Then the world can keep developing, without wars and crimes.




Seeded In Scaffolding Material Engineering Essay

 


Most of the conventional scaffold fabrication techniques, such as fiber bonding, solvent casting, and melt moulding [2], yield random porous architectures that do not necessarily provide an appropriately homogeneous environment for bone formation. Moreover, a non-uniform microenvironment can leave regions with inadequate nutrient concentrations, so cultured tissue grows with poor cellular activity, preventing the formation of new tissue of homogeneous quality.


In the tissue engineering field, rapid prototyping is one of the most efficient techniques for designing and creating a highly porous artificial extracellular matrix (ECM), or scaffold, that accommodates and guides the proliferation of new cells. A scaffold is a porous polymeric structure made of biodegradable material such as poly-lactic acid (PLA) or poly-glycolic acid (PGA) [3]. Successful regeneration of new tissue relies mainly on the structural formability of the tissue scaffold and on bioreactors that provide an appropriate environment for new cell viability and function. Rapid prototyping can quickly produce complex products from a computer model based on the patient's CT data. However, RP techniques still have limitations and shortcomings to be resolved, such as mechanical strength and the distribution of interconnected channels and pores [1]. They still need to be improved in order to produce well-defined tissue-engineered scaffolds with appropriate chemical and mechanical microenvironments. In this review, we discuss further developments of RP techniques in tissue engineering in terms of their major aspects: methods and materials.


Rapid Prototyping Technologies [5]


Rapid prototyping is an advanced technology built on developments in computer technology and manufacturing. It is currently being used by investigators to produce scaffolds for tissue engineering. Rapid prototyping methods can be categorized as liquid-based, solid-based, or powder-based. In the RP process, the 3D model is created one layer at a time from computer-generated data until the whole product is complete.


The main RP systems used in the tissue engineering field are:


(1) Stereolithography Apparatus (SLA)


(2) Selective Laser Sintering (SLS)


(3) Fused Deposition Modeling (FDM)


(4) Three-dimensional printing (3-DP)


The advantages and limitations of each rapid prototyping technology applied in TE are summarized in the table below.


Table. Advantages and limitations of SFF fabrication techniques [5]


SLA
Advantages: easy to remove support and trapped materials; small features can be obtained accurately.
Limitations: the choice of photopolymerizable, biocompatible, biodegradable liquid polymer materials is limited.


SLS
Advantages: good compressive strengths; greater material choice; no solvent needed.
Limitations: high processing temperatures; difficult to remove trapped material in small inner features.


FDM
Advantages: no trapped material within small features; no solvent needed; good compressive strengths.
Limitations: support material is required for irregular structures; anisotropy between the XY and Z directions.


3D-P
Advantages: wider field of material choice; low heat effect on the raw material.
Limitations: difficult to remove trapped material in small inner features; toxic organic solvents needed; mechanical strength not good enough.


From the comparison above, it can be clearly seen that the main limitations are the choice of materials, toxic binders and poor feature symmetry [5].


Selective Laser Sintering Process (SLS) [6]


At first, CAD data files of the object in the .STL file format are transferred to the RP system, where they are mathematically sliced into layers of equal thickness. From this point the SLS process operates as follows:


-A thin layer of heat-fusible powder is deposited onto the part-building chamber.


-The bottom-most cross-sectional slice of the CAD part to be fabricated is selectively scanned on the layer of powder by a carbon dioxide laser. The intersection of the laser beam with the powder elevates the temperature to the point of melting, fusing the powder particles to form a solid mass.


-The newly sintered layer of powder and the previously formed layers are fused together to form the object.
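As a minimal illustration of the slicing step described above, the sketch below computes the z-heights of equal-thickness layers for a part of a given build height; the function name and the numerical values are assumptions made only for this example, not values from [6].

# Minimal sketch (Python): slicing a part of known height into layers of
# equal thickness, as done mathematically on the .STL data before sintering.
# The part height and layer thickness below are illustrative assumptions.

def slice_heights(part_height_mm, layer_thickness_mm):
    """Return the z-height of the top of each layer, from bottom to top."""
    n_layers = int(round(part_height_mm / layer_thickness_mm))
    return [(i + 1) * layer_thickness_mm for i in range(n_layers)]

if __name__ == "__main__":
    layers = slice_heights(12.0, 0.1)   # e.g. a 12 mm part in 0.1 mm layers
    print(len(layers), "layers; top of first layer at", layers[0], "mm")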


The figure below shows the process chain of the SLS technique.


Fig.1. Schematic layout of the SLS process. [6]


Improvements to the SLS process that create smaller features, by using a smaller laser spot size, smaller powder size and thinner layer thickness, are expected to produce the desired scaffolds for TE [1]. The ease of removing trapped loose powder is also one of the criteria for current techniques; existing solutions include ultrasonic vibration, compressed air, bead blasting and/or a suitable solvent [1].


The biomaterials used by SLS systems are non-biocompatible and bio-inert in nature. Because of this, SLS application in scaffold production is still limited. Moreover, SLS fabrication of TE scaffolds often requires an organic solvent to remove trapped materials [1], which can harm inner organs when the structure is implanted in the human body [6].


K.H. Tan et al. described the biocompatible polymers Polyetheretherketone (PEEK), Poly(vinyl alcohol) (PVA), Polycaprolactone (PCL) and Poly(L-lactic acid) (PLLA), and a bioceramic, Hydroxyapatite (HA), used to fabricate TE scaffolds [6]. With these polymers, the post-process does not need any organic solvent to remove trapped material.


The properties and sources of these polymers are described below:


PCL: Mw 10,000; Tm 60 °C; Tg -60 °C; source: Polyscience Inc. (USA)

PLLA: Tm 172 °C ~ 186.8 °C; Tg 60.5 °C; avg. inherent viscosity 2.53 dl/g; source: PURAC Asia Pacific Pte. Ltd. [ ]

PEEK: Tm 343 °C; Tg 143 °C; particle size 25 µm; source: Victrex PIC, Lancashire, UK

PVA: Mw 89,000 ~ 98,000; Tm 220 ~ 240 °C; Tg 58 ~ 85 °C; particle size 100 µm; source: Aldrich Chemical Company

HA: density 3.05 g/cm³; particle size below 60 µm (Coulter Counter analysis)

Among all these biomaterials, HA is highly biocompatible and can provide good bonding between tissue and the ceramic material [6]. In the process, the released calcium and phosphate ions induce osteogenesis and provide the bonding of the ceramic implant to the bone [6].


The experimental results for their optimized laser sintering process parameters are given in the table below [6]:


Material    Part bed temperature (°C)    Laser power (W)    Scan speed (mm/s)

PCL         40                           2-3                3810

PLLA        60                           12-15              1270

PVA         65                           13-15              1270-1778

PEEK        140                          16-21              5080

PEEK/HA     140                          16                 5080


K.H. Tan et al. reported that, in sintering the PEEK/HA bio-composite blend, reducing the percentage of PEEK in the powder made the scaffold fragile and therefore impractical for laser sintering. Their experimental results show that an HA content of 40 wt% gives a structure with good integrity, and the composition ratio should be kept at this value in order to obtain good results.


From this research it can be clearly seen that (i) part bed temperature, (ii) laser power and (iii) scan speed are the three main parameters controlling the micro-porosity of the structure [6].


Another powder-based RP technique is 3D printing (3DP). Bioresorbable polymers and copolymers (based on either polycaprolactone or polylactic and polyglycolic acids) [1, 7, 8] are used in this technique. The 3DP system deposits a liquid binder through multiple ink-jets onto a powder bed. The powder particles are glued to each other in layers down to roughly 0.08 mm thickness. Depending on the proposed solution, biomaterials are incorporated in either the powder bed, the liquid binder or a post-process infiltrating agent. Well-defined porosity can be achieved by careful selection of the binder printing parameters or by mixing the powder with salt that is subsequently dissolved in water. The 3DP process has the advantage of being conducted at room temperature, although concerns remain regarding binder toxicity and the mechanical strength of the built parts.


The process chain of 3D-printing is as shown in Figure below.


Fig. 3-Dimensional Printing Schematic [10]


Organ printing[11]


Based on the concept of the 3D-printing technique, one of the developments of rapid prototyping technology in TE is organ printing.


Vladimir et al. [11] demonstrated in their report that organ printing is a biomedically relevant RP technique which exploits the fluid behaviour of tissue. Its computer-assisted deposition materials are cells, cell aggregates or matrix. The components used in organ printing are jet-based cell printers, cell dispensers or bioplotters, different types of 3D hydrogels and varying cell types.


Fig- (a) CAD-based cell printer (b) Bovine aortic endothelial cells printed in 50-micron drops in a line (c) Cross-section of the p(NIPA-co-DMAEA) gel showing the thickness of sequentially placed layers (d) Real cell printer (e) The cell printer connected to a PC via a bidirectional parallel port, together with 9 jets (f) Endothelial cell aggregates 'printed' on collagen before their fusion (g) After their fusion. This information is taken from [11]


The sequential process of organ printing includes (i) preprocessing, (ii) processing and (iii) postprocessing [10]. Preprocessing is the development of a computer-aided design (CAD), or blueprint, of the specific organ structure. The required 3D design can be obtained by digitised image reconstruction of a natural organ or tissue. This imaging data is derived from various modalities such as noninvasive scanning of the human body (e.g. MRI or CT) or a detailed 3D reconstruction of serial sections of the specific organ. In processing, the CAD design of the specific organ structure is printed layer by layer with a jet-based cell printer. Postprocessing is the perfusion of the printed organ and its biomechanical conditioning to both direct and accelerate organ maturation.


Fig- Schematic representation of cell printing, assembly and their perfusion of 3D printed vascular unit. Red-Endothelial cells aggregates, Blue-Smooth muscle cell aggregate [11]


Basically, organ printing takes advantage of the fusion phenomenon of embryonic cells or tissues, which behave as viscoelastic fluids that can flow and fuse [11].


Vladimir et al. [11] described the following achievements: the development of a printer which can print cells and/or cell aggregates; the demonstration of a procedure for layer-by-layer sequential deposition and solidification of a thermo-reversible gel or matrix; and the fusion property of embryonic cells, cell aggregates or tissues, which, when placed closely together, fuse into ring-like or tube-like structures within the gel. All these achievements and advantages point out that organ printing is a feasible advanced technique for TE.


In the application of rapid prototyping techniques to TE, the vascular density of the desired organ is one of the most crucial factors for adequate organ perfusion, oxygen supply and functioning [10]. A tissue-engineered organ cannot survive and develop without adequate vascularisation.


The authors recommended the organ printing technique for the unique opportunity it offers to eventually print a sophisticated branching vascular tree during the process of printing the specific organ.




Subsea Completions And Workover Subsea Trees Engineering Essay

This report centres on how Subsea Completions and Workover, Subsea Trees and Subsea Processing are applied to maximize oil production in the Gulf of Mexico.

Well completion involves the installation of a production conduit into which various components have been incorporated to allow efficient production, pressure-integrity testing, etc. [2].

Workover is the recompletion of the well to restore production or change the well function [2], or the process of replacement and maintenance operations on the tools in an oil or gas well.

A subsea tree, also called a wet tree, is "an assembly of control valves, gauges and chokes that control oil and gas flow in a completed well" [2]. The tree also enables methanol and chemical injection and pressure and temperature monitoring, and allows vertical access for intervention [2].

Hydrocarbon processing is the removal of unwanted constituents and the recovery of wanted constituents under controlled conditions of pressure and temperature. Subsea processing is the processing of hydrocarbon fluids on the seabed. The processes involved include water re-injection, multiphase boosting, phase separation and gas compression. Not all processes are done offshore; some are still designed for onshore processing.

The Gulf of Mexico region is an arm of the Atlantic Ocean, bounded on the northeast, north and northwest by the Gulf coast of the United States, on the southwest and south by Mexico, and on the southeast by Cuba [3]. In this region, completions and workovers, subsea trees and subsea processing have each been designed to do a particular task. The Gulf of Mexico is richly endowed with hydrocarbon deposits in deep water. Below is a picture showing the Gulf of Mexico and the countries around the region.

The Gulf of Mexico [4]

Subsea well completion involves all the work done on the well prior to production and the installation of subsurface equipment (e.g. tubing hanger, blowout preventer (BOP), etc.) in order to produce successfully from the well. Completion consists of the lower and upper completion processes. The upper completion involves installation of all the various components from the base of the production tubing right to the top, while the lower completion takes place around the production area. Some categories of lower completion are:

2.1 Barefoot Completion: This type of completion is suitable for hard rock, multilaterals and underbalanced drilling. It is not suitable for weaker formations requiring sand control, or for wells that require selective isolation of oil, gas and water intervals [5].

Barefoot completion [8]

2.2 Cased Hole Completion: This is the portion of the wellbore that has had metal casing placed and cemented to protect the open hole from fluids, pressures, wellbore stability problems or a combination of these. It is the process whereby a casing is run down through the production zone and cemented in place. This type of completion allows good control of fluid flow [5].

Cased hole completion [7]

2.3 Open Hole Completion: This type of completion is more advantageous in horizontal wells because of the technical difficulties and the high cost of cemented liners associated with horizontal wells [5].

Open hole completions, the simplest type of oil or gas well completion, have several limitations and disadvantages. Consequently, they are typically limited to special completions in formations capable of withstanding production conditions [6].

Open hole completion [6]

Perforating Guns: This component is used to create a predefined pattern of perforations through the side of the well into the reservoir by means of explosive charges, to allow the flow of oil into the well [9]. An example is shown below.

Perforating gun [9]

Wellhead: This is the main component that houses the valves that control fluid from the well to the manifold. It also acts as an interface between the production facility and the reservoir.

Wellhead [10]

Tubing Hanger: This component is located on top of the manifold and provides support for the production tubing. See the picture below.

Tubing Hanger [11]

Production Packer: “This is a standard component of the completion hardware of oil and gas well and it is a seal between the tubing and the casing. It is used to isolate one part of the annulus from another for various reasons”. This is done to separate different sections like the gas lifts section from the production section. It is also used in injection wells to isolate the zones. [12].

Production packer [2]

Production tubing: This is the basic channel through which hydrocarbon flows from the reservoir to the surface. The diagram is seen below.

Production Tubing [13]

Downhole Safety Valve: This is used to protect the surface from the uncontrolled release of hydrocarbons. It is a cylindrical valve with either a ball or flapper closing mechanism; it is installed in the production tubing and is held in the open position by hydraulic pressure from surface [5]. See the diagram below.

Downhole Safety Valve [14]

Annular Safety Valve: This is needed to isolate the production tubing in order to prevent the inventory of natural gas downhole from becoming a hazard. See the diagram below.

Annular Safety Valve [15]

Landing Nipples: This is a receptacle to receive wireline tools. It is also a useful marker for depths in the well, which can otherwise be difficult to determine accurately, as shown in the diagram below [4].

Landing Nipples [16]

Downhole Gauges: This is an electronic or fibre-optic sensor providing continuous monitoring of downhole pressure and temperature. Gauges use a 1/4" control line clamped onto the outside of the tubing string to provide electrical or fibre-optic communication to the surface, as shown in the diagram below.

Downhole Gauge [17]

Wireline Entry Guide: This component is often installed at the end of the tubing (the shoe). It is intended to make pulling out wireline tools easier by offering a guiding surface for the tool string to re-enter the tubing without getting caught on the side of the shoe. The diagram is shown below [5].

Wireline entry guide [18]

Centralizer: In highly deviated wells, this component may be included towards the foot of the completion. It consists of a large collar, which keeps the completion string centralised within the hole [5].

Centralizer [19]

The Mensa field is an example of completions in the Gulf of Mexico. It consists of three wells, gathers gas into a manifold and transports it to the West Delta 143 platform 68 miles away. See the diagrams below [20].

Subsea development [20] Subsea Production manifold [20]

Well Performance Sensitivities [2]

“Reduced production, scale, tubing and components leaks, artificial lift failures e.g. ESP failure, water shut off and re-perforation, change of well function e.g. producer to injector are some events needed for workover operation on a well” [2]. “A brief summary of the completed workovers in the Gulf of Mexico are:

A-10: Cleared debris and zone was re-perforated. Initial production 140bopd with 10/64 chokes. Well continues to produce at a rate of 140 bopd.

A-2: Cleared debris and oil flowed to the surface followed by emulsions. Currently, the well is being analyzed to determine the appropriate solution needed to liquefy the emulsions so that the well can flow without interruption.

A-16: Cleared debris and re-perforated. Well did not produce from existing zone. Currently under analysis to determine if other zones can be considered as candidates for perforation” [21].

Subsea trees can be classified into three types, based on tree configuration, tree functionality and tree installation.

Schematic of the subsea tree [22]

Horizontal Trees

The following are the features of a horizontal tree:

– “The valves are set off to the side.

– Well intervention can be done through them.

– No valves in the vertical bore

– Tree run before the Tubing Hanger

– Tubing Hanger orients from Tree (Passive)

– Internal Tree Cap installed

– Tubing Hanger seals are exposed to well fluids” [23]

Horizontal Tree [24]

Conventional Dual Bore (Vertical) Trees

Below are the features of a dual bore tree:

– “Master & Swab valves in vertical bore

– Tree run after Tubing Hanger

– Tubing Hanger orients from Wellhead or BOP pin (Active)

– External Tree Cap installed

– Tubing hanger seals isolated from well fluids” [23]

Conventional Dual Bore Tree [24]

A third type is the Mudline tree. These are usually used for shallow water applications and typically installed from jack-up rigs. They have minimal hydraulic functions [24].

Trees generally can either be used on production wells or on injection wells. Thus we have

Production Trees

Injection Trees

Trees can be installed either with Guidelines or Without Guidelines.

Examples of installed subsea trees in the Gulf of Mexico are:

This was used at the Shell-operated Silvertip field, part of the Perdido Development, which set the current subsea deepwater completion record of 9,356 ft [25].

Enhanced Deepwater Subsea Tree [26]

This was the world’s first 15,000psig subsea tree. The tree was adapted by Cameron from an existing mono-bore mudline tree, with modified components from its 10,000psig tree design [27].

Gyrfalcon Subsea Tree [27] During Installation [27]

This was to be supplied by FMC in the Blind Faith Development which is located in approximately 7,000ft of water [28].

15k Enhanced Horizontal Tree [28]

The Troika oil field, located 150 miles offshore Louisiana in the Green Canyon 244 unit and lying in a water depth of 2,700 ft, made use of conventional, non-TFL, 10,000 psi dual-bore 4in×2in trees, installed using guidelines [29].

“Deployment of subsea processing systems has seen a marked acceleration in the past couple of years, with various separation and boosting systems being ordered for deployment in the North Sea , the Gulf of Mexico, West Africa, South America and Australia” [30]. A driving factor for this is cost. Cost reduction is obvious when large and expensive topside facilities are eliminated for the subsea ones. Other drivers include “flow management and flow assurance, accelerated and or increased recovery, development of challenging subsea fields” [31]. Deployments in the Gulf of Mexico include:

Submerged Production System [32]

“The start-up of Aker’s MultiBooster pump technology at a water depth of 5,500ft below surface is expected to boost BP’s production at the King Field by an average of 20%. The MultiBooster system is a subsea multiphase pump module, combining field-proven twin screw technology with Aker’s suite of processing and subsea technology” [33].

Aker Kvaerner’s MultiBooster [33]

Also in the Perdido development, FMC's scope of work included the supply of a subsea caisson separation and boosting system [34]. Gas/liquid caisson separators with ESPs were used because of the field's low reservoir pressure and heavy oil [31].

Gas/Liquid Caisson separator at 2500m/8200ft water depth

for the Perdido Project [31]

Subsea technology and development in the Gulf of Mexico has improved over the years as a result of new innovations that move the industry forward and exploit the abundant natural resources beneath the deep water, together with the different technologies involved. This also makes production and exploration activities in this region more fruitful for operators as well as marketers.




The Response Surface Methodology Engineering Essay

 


The previous chapter discussed the working principle of the milling machine, the machining parameters that affect surface roughness, chip thickness formation and the factors influencing surface roughness in milling. This chapter gives a detailed overview of response surface methodology together with its mathematical background.


Response Surface Methodology (RSM) is a collection of statistical and mathematical techniques useful for developing, improving, and optimizing processes [23]. The most far-reaching applications of RSM are in situations where several input variables potentially influence some performance measure or quality characteristic of the process. This performance measure or quality characteristic is called the response. The input variables are sometimes called independent variables, and they are subject to the control of the scientist or engineer. The field of response surface methodology consists of the experimental strategy for exploring the space of the process or independent variables, empirical statistical modelling to develop an appropriate approximating relationship between the yield and the process variables, and optimization methods for finding the values of the process variables that produce desirable values of the response.


In this thesis, the focus is on the statistical modelling used to develop an appropriate approximating model between the response y and the independent variables x1, x2, …, xk.


In general, the relationship is


y = f(x1, x2, …, xk) + ε ………………………………….………. (3.1)


where the form of the true response function f is unknown and perhaps very complicated, and ε is a term that represents other sources of variability not accounted for in f. Usually ε includes effects such as measurement error on the response, background noise, the effect of other variables, and so on. Usually ε is treated as a statistical error, often assuming it to have a normal distribution with mean zero and variance σ². Then


η = E(y) = f(x1, x2, …, xk) ……………………………………… (3.2)


The variables in Equation (3.2) are usually called the natural variables, because they are expressed in the natural units of measurement, such as degrees Celsius, pounds per square inch, etc. In much RSM work it is convenient to transform the natural variables into coded variables, which are usually defined to be dimensionless with mean zero and the same standard deviation. In terms of the coded variables, the response function (3.2) is written as


η = f(x1, x2, …, xk) ……………………………………………………(3.3)
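As a minimal sketch of the coding step, assuming a hypothetical machining variable and range (not taken from this thesis), the transformation x = (natural value − centre) / half-range maps the low and high levels to -1 and +1:

# Minimal sketch (Python): transforming a natural variable to a coded,
# dimensionless variable. The spindle-speed range used here is an
# illustrative assumption, not an experimental setting from this work.

def code_variable(value, low, high):
    center = (high + low) / 2.0
    half_range = (high - low) / 2.0
    return (value - center) / half_range

if __name__ == "__main__":
    for rpm in (1000, 1500, 2000):            # hypothetical spindle speeds
        print(rpm, "rpm ->", code_variable(rpm, 1000, 2000))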


Because the form of the true response function f is unknown, we must approximate it. In fact, successful use of RSM is critically dependent on the experimenter's ability to develop a suitable approximation for f. Usually, a low-order polynomial in some relatively small region of the independent variable space is appropriate. In many cases, either a first-order or a second-order model is used. The first-order model is likely to be appropriate when the experimenter is interested in approximating the true response surface over a comparatively small region of the independent variable space, in a location where there is little curvature in f. For the case of two independent variables, the first-order model in terms of the coded variables is


η = β0 + β1x1 + β2x2 …………………………………………...…..….. (3.4)


The form of the first-order model in Equation (3.4) is sometimes called a main effects model, because it includes only the main effects of the two variables x1 and x2. If there is an interaction between these variables, it can easily be added to the model as follows:


η = β0 + β1x1 + β2x2 + β12x1x2 ……………………………………….. (3.5)


This is the first-order model with interaction. Adding the interaction term introduces curvature into the response function. Often the curvature in the true response surface is strong enough that the first-order model (even with the interaction term included) is inadequate. A second-order model will likely be required in these situations. For the case of two variables, the second-order model is


η = β0 + β1x1 + β2x2 + β11x1² + β22x2² + β12x1x2 ……………………… (3.6)


This model would likely be useful as an approximation to the true response surface in a relatively small region. The second-order model is widely used in response surface methodology for several reasons:


The second-order model is very flexible. It can take on a wide variety of functional forms, so it will often work well as an approximation to the true response surface.


It is easy to estimate the parameters (the β's) in the second-order model. The method of least squares can be used for this purpose (a small sketch follows this list).


There is considerable practical experience indicating that second-order models work well in solving real response surface problems.
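As a minimal sketch of such a least-squares fit, assuming a small invented data set in coded variables (the factor settings and responses are not measurements from this work), the β's of model (3.6) can be estimated as follows:

# Minimal sketch (Python/NumPy): estimating the beta's of the two-variable
# second-order model (3.6) by ordinary least squares. The factor settings
# and responses below are invented purely for illustration.
import numpy as np

x1 = np.array([-1, 1, -1, 1, 0, 0, 0, -1, 1], dtype=float)
x2 = np.array([-1, -1, 1, 1, 0, -1, 1, 0, 0], dtype=float)
y  = np.array([54.0, 60.2, 57.1, 68.9, 62.0, 58.3, 63.5, 56.8, 65.0])

# design matrix columns: 1, x1, x2, x1^2, x2^2, x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["b0", "b1", "b2", "b11", "b22", "b12"], beta):
    print(name, "=", round(b, 3))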


In general, the first-order model is


η = β0 + β1x1 + β2x2 + … + βkxk …………………………………………(3.7)


and the second-order model is


η = β0 + Σi βixi + Σi βiixi² + ΣΣi<j βijxixj ………...………..(3.8)


In some situations, approximating polynomials of order greater than two are used. The general motivation for a polynomial approximation to the true response function f is based on the Taylor series expansion around the point x10, x20, …, xk0.


Finally, let’s note that there is a close connection between RSM and linear regression analysis. For example, consider the model


y = β0 + β1x1 + β2x2 + … + βkxk + ε ………………………………….…(3.9)


The β's are a set of unknown parameters. To estimate the values of these parameters, we must collect data on the system we are studying. Because, in general, polynomial models are linear functions of the unknown β's, we refer to the technique as linear regression analysis.


RSM is an important branch of experimental design. RSM is a critical tool in developing new processes and optimizing their performance. The objectives of quality improvement, including reduction of variability and improved process and product performance, can often be accomplished directly using RSM. It is well known that variation in key performance characteristics can result in poor process and product quality. During the 1980s [2, 3] considerable attention was given to process quality, and methodology was developed for using experimental design, specifically for the following:


For designing or developing products and processes so that they are robust to component variation.


For minimizing variability in the output response of a product or a process around a target value.


For designing products and processes so that they are robust to environmental conditions.


Robust means that the product or process performs consistently on target and is relatively insensitive to factors that are difficult to control. Professor Genichi Taguchi [24, 25] used the term robust parameter design (RPD) to describe his approach to this important problem. Essentially, robust parameter design methodology seeks to reduce process or product variation by choosing levels of controllable factors (or parameters) that make the system insensitive (or robust) to changes in a set of uncontrollable factors that represent most of the sources of variability. Taguchi referred to these uncontrollable factors as noise factors. RSM assumes that these noise factors are uncontrollable in the field, but can be controlled during process development for the purposes of a designed experiment.


Considerable attention has been focused on the methodology advocated by Taguchi, and a number of flaws in his approach have been exposed. However, the framework of response surface methodology allows many useful concepts in his philosophy to be incorporated without difficulty [23]. There are also two other full-length books on the subject of RSM [26, 27]. In our technical report we concentrate mostly on building and optimizing the empirical models and largely do not consider the problems of experimental design.


Most applications of RSM are sequential in nature. At first, some ideas are generated about which factors or variables are likely to be important in the response surface study. This is usually called a screening experiment. The objective of factor screening is to reduce the list of candidate variables to a relatively few, so that subsequent experiments will be more efficient and require fewer runs or tests. The purpose of this phase is the identification of the important independent variables.


The experimenter's objective is then to determine whether the current settings of the independent variables result in a value of the response that is near the optimum. If the current settings or levels of the independent variables are not consistent with optimum performance, the experimenter must determine a set of adjustments to the process variables that will move the process toward the optimum. This phase of RSM makes significant use of the first-order model and an optimization technique called the method of steepest ascent (descent).
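A minimal sketch of the method of steepest ascent, assuming invented first-order coefficients and an assumed step length in coded units (neither taken from this work): successive trial points are placed along the direction of the fitted coefficients.

# Minimal sketch (Python/NumPy): the path of steepest ascent from the
# design centre of a fitted first-order model. The coefficients b1, b2
# and the step length are illustrative assumptions.
import numpy as np

b = np.array([2.4, -1.1])            # fitted first-order coefficients b1, b2
direction = b / np.linalg.norm(b)    # unit vector of steepest ascent
step = 0.5                           # step length in coded units (assumed)

for k in range(1, 6):
    point = k * step * direction
    print("step", k, ": x1 =", round(point[0], 3), ", x2 =", round(point[1], 3))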


Phase 2 begins when the process is near the optimum. At this point the experimenter usually wants a model that will accurately approximate the true response function within a relatively small region around the optimum. Because the true response surface usually exhibits curvature near the optimum, a second-order model (or perhaps some higher-order polynomial) should be used. Once an appropriate approximating model has been obtained, it may be analyzed to determine the optimum conditions for the process. This sequential experimental process is usually performed within some region of the independent variable space called the operability region, experimentation region or region of interest.


Multiple linear regression (MLR) is a method used to model the linear relationship between a dependent variable and one or more independent variables. The dependent variable is sometimes also called the predictand, and the independent variables the predictors. MLR is based on least squares: the model is fitted such that the sum of squares of the differences between observed and predicted values is minimized. The relationship between a set of independent variables and the response y is determined by a mathematical model called the regression model. When there is more than one independent variable, the regression model is called a multiple-regression model. In general, a multiple-regression model with q independent variables takes the form


Yi = β0 + β1xi1 + β2xi2 + … + βqxiq + εi (i = 1, 2, …, n)


Yi = β0 + Σj βjxij + εi (j = 1, 2, …, q)


where n > q. The parameter βi measures the expected change in the response y per unit increase in xi when the other independent variables are held constant. The ith observation at the jth level of the independent variable is denoted by xij. The data organization for the multiple-regression model is shown in Table 3.1.


Table 3.1: Data for Multiple-Regression Model


y      x1     x2     …..   xq

y1     x11    x12    …..   x1q

y2     x21    x22    …..   x2q

…      …      …      …..   …

yn     xn1    xn2    …..   xnq
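A small sketch of fitting such a model by solving the least-squares normal equations, with a tiny invented data set laid out as in Table 3.1 (two predictors, five observations):

# Minimal sketch (Python/NumPy): fitting y = b0 + b1*x1 + b2*x2 + e by the
# normal equations (X'X) b = X'y. The observations are invented.
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([0.5, 0.3, 0.8, 0.4, 0.9])
y  = np.array([2.1, 3.0, 4.4, 4.9, 6.3])

X = np.column_stack([np.ones_like(x1), x1, x2])   # intercept column plus predictors
beta = np.linalg.solve(X.T @ X, X.T @ y)
print("b0, b1, b2 =", np.round(beta, 3))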


Box-Behnken designs are rotatable designs that also fit a full quadratic model but use just three levels of each factor. Design-Expert offers Box-Behnken designs for three to seven factors. These designs require only three levels, coded as -1, 0, and +1. Box and Behnken created this design by combining two-level factorial designs with incomplete block designs. This procedure creates designs with desirable statistical properties but, most importantly, with only a fraction of the experiments needed for a full three-level factorial. These designs offer limited blocking options, except for the three-factor version.


Box-Behnken designs require a lower number of actual experiments to be performed, which facilitates probing into possible interactions between the parameters studied. The Box-Behnken design is a spherical, rotatable design. It consists of a central point and the middle points of the edges of a cube inscribed on a sphere. It contains three interlocking two-level factorial designs and a central point. In the present work, the three-level, three-factor Box-Behnken experimental design is applied to study the process parameters.
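A minimal sketch of how the three-factor Box-Behnken runs can be generated, combining the ±1 two-level factorial over each pair of factors with the third factor held at its centre, plus centre points (the number of centre points here is an assumption):

# Minimal sketch (Python): generating the 3-factor Box-Behnken design by
# pairing the +/-1 factorial over each pair of factors with the remaining
# factor at 0, then appending centre points (3 centre points assumed).
from itertools import combinations, product

def box_behnken_3factor(n_center=3):
    runs = []
    for pair in combinations(range(3), 2):          # factor pairs (0,1), (0,2), (1,2)
        for levels in product((-1.0, 1.0), repeat=2):
            run = [0.0, 0.0, 0.0]
            run[pair[0]], run[pair[1]] = levels
            runs.append(tuple(run))
    runs.extend([(0.0, 0.0, 0.0)] * n_center)       # centre points
    return runs

if __name__ == "__main__":
    design = box_behnken_3factor()
    for run in design:
        print(run)
    print(len(design), "runs in total")             # 12 edge runs + 3 centre runs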


3.1.5 Analysis of Variance (ANOVA)


The purpose of the statistical analysis of variance (ANOVA) is to determine which design parameters significantly affect the surface roughness. Based on the ANOVA, the relative magnitude of the machining parameters with respect to surface roughness is investigated to determine more accurately the best possible combination of the machining parameters.


Analysis of variance (ANOVA) uses the same conceptual framework as linear regression. The main difference comes from the nature of the explanatory variables: instead of quantitative, here they are qualitative. In ANOVA, explanatory variables are often called factors. If p is the number of factors, the ANOVA model is written as follows:


yi = β0 + Σj βk(i,j),j + εi ………………………………………………… (3.1)


where yi is the value observed for the dependent variable for observation i, k(i,j) is the index of the category of factor j for observation i, and εi is the error of the model. The hypotheses used in ANOVA are identical to those used in linear regression: the errors εi follow the same normal distribution N(0, σ) and are independent.


The way the model is written, with this hypothesis added, means that, within the framework of the linear regression model, the yi are the expression of random variables with mean µi and variance σ², where


µi = β0 + Σj βk(i,j),j ……………………………………………...….(3.2)


To use the various tests proposed in the results of linear regression, it is recommended to check retrospectively that the underlying hypotheses have been correctly verified. The normality of the residuals can be checked by analyzing certain charts or by using a normality test. The independence of the residuals can be checked by analyzing certain charts or by using the Durbin-Watson test.
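As a minimal sketch of the basic ANOVA test, assuming three invented groups of surface-roughness measurements (one group per level of a single factor; the numbers are not from this work), the F statistic and p-value can be obtained as follows:

# Minimal sketch (Python/SciPy): one-way ANOVA testing whether one factor
# (three levels) significantly affects surface roughness. Ra values are
# invented for illustration.
from scipy.stats import f_oneway

level_low  = [1.82, 1.75, 1.90, 1.86]
level_mid  = [2.10, 2.04, 2.15, 2.08]
level_high = [2.55, 2.47, 2.61, 2.52]

f_stat, p_value = f_oneway(level_low, level_mid, level_high)
print("F =", round(f_stat, 2), " p =", round(p_value, 4))
# A small p-value suggests the factor levels differ significantly.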


Interactions: By interaction is meant an artificial (not measured) factor which reflects the interaction between at least two measured factors. To make a parallel with linear regression, the interactions are equivalent to the products between the continuous explanatory variables, although here obtaining an interaction requires nothing more than a simple multiplication between two variables. The notation used to represent the interaction between factor A and factor B is A*B. The interactions to be used in the model can easily be defined in the DOE++ software.


Nested effects: When constraints prevent us from crossing every level of one factor with every level of the other factor, nested factors can be used. We say we have a nested effect when fewer than all levels of one factor occur within each level of the other factor. An example of this might be if we want to study the effects of similar machines and different operators on some output characteristic, but we cannot have the operators change the machines they run. In this case, each operator is not crossed with each machine but rather runs only one machine. DOE++ software has an automatic device to find nested factors, and one nested factor can be included in the model.


Balanced and unbalanced ANOVA: We talk of a balanced ANOVA when, for each factor (and interaction if available), the number of observations within each category is the same. When this is not true, the ANOVA is said to be unbalanced. DOE++ software can handle both cases.


Random effects: Random factors can be included in an ANOVA. When some factors are assumed to be random, DOE++ software displays the expected mean squares table.


Constraints: During the calculations, each factor is broken down into a sub-matrix containing as many columns as there are categories in the factor. Typically, this is a full disjunctive table. Nevertheless, this breakdown poses a problem: if there are g categories, the rank of this sub-matrix is not g but g-1. This leads to the need to delete one of the columns of the sub-matrix and possibly to transform the other columns. Several strategies are available depending on the interpretation we want to make afterwards:


a1=0: the parameter for the first category is null. This choice allows us to force the effect of the first category to act as a standard. In this case, the constant of the model is equal to the mean of the dependent variable for group 1.


ag=0: the parameter for the last category is null. This choice allows us to force the effect of the last category to act as a standard. In this case, the constant of the model is equal to the mean of the dependent variable for group g.


Sum(ai) = 0: the sum of the parameters is null. This choice forces the constant of the model to be equal to the mean of the dependent variable when the ANOVA is balanced.


Sum(ni.ai) = 0: the sum of the parameters is null. This choice forces the constant of the model to be equal to the mean of the dependent variable even when the ANOVA is unbalanced.


Note: even if the choice of constraint influences the values of the parameters, it has no effect on the predicted values or on the various goodness-of-fit statistics.


Multiple Comparisons Tests: One of the main applications of ANOVA is multiple comparison testing, whose aim is to check whether the parameters for the various categories of a factor differ significantly or not. For example, in the case where four treatments are applied to plants, we want to know not only whether the treatments have a significant effect, but also whether the treatments have different effects. Numerous tests have been proposed for comparing the means of categories. The majority of these tests assume that the sample is normally distributed. DOE++ software provides the main such tests.
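As a minimal sketch, assuming the same three invented groups used above, a pairwise multiple-comparison test such as Tukey's HSD (available in recent SciPy releases) can be run as follows; this is an illustration, not the specific set of tests offered by DOE++:

# Minimal sketch (Python/SciPy): Tukey's HSD pairwise comparison of the
# three invented surface-roughness groups, to see which pairs of levels
# differ significantly. Requires a recent SciPy version.
from scipy.stats import tukey_hsd

level_low  = [1.82, 1.75, 1.90, 1.86]
level_mid  = [2.10, 2.04, 2.15, 2.08]
level_high = [2.55, 2.47, 2.61, 2.52]

result = tukey_hsd(level_low, level_mid, level_high)
print(result)   # pairwise differences, p-values and confidence intervals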


Summary of the variables selection: Where a selection method has been chosen, DOE++ software displays the selection summary. For a stepwise selection, the statistics corresponding to the successive steps are displayed. Where the best model for a number of variables varying from p to q has been selected, the best model for each number of variables is displayed with the corresponding statistics, and the best model for the chosen criterion is displayed in bold.


Observations: The number of observations used in the calculations. In the formulas shown below, n is the number of observations.


Sum of weights: The sum of the weights of the observations used in the calculations. In the formulas shown below, W is the sum of the weights.


DF: The number of degrees of freedom for the chosen model (corresponding to the error part).


R²: The determination coefficient for the model. This coefficient, whose value is between 0 and 1, is only displayed if the constant of the model has not been fixed by the user. Its value is defined by:


R² = 1 − [Σi wi(yi − ŷi)²] / [Σi wi(yi − ȳ)²], where ȳ = (1/W) Σi wiyi


The R² is interpreted as the proportion of the variability of the dependent variable explained by the model. The nearer R² is to 1, the better the model. The problem with R² is that it does not take into account the number of variables used to fit the model.


Adjusted R²: The adjusted determination coefficient for the model. The adjusted R² can be negative if the R² is near zero. This coefficient is only calculated if the constant of the model has not been fixed by the user. It is commonly defined, for p explanatory variables, by:

R²adj = 1 − (1 − R²)(W − 1)/(W − p − 1)


The adjusted R² is a correction to the R² which takes into account the number of variables used in the model. The analysis of variance table is used to evaluate the explanatory power of the explanatory variables. Where the constant of the model is not set to a given value, the explanatory power is evaluated by comparing the fit (in the least-squares sense) of the final model with the fit of the rudimentary model consisting only of a constant equal to the mean of the dependent variable. Where the constant of the model is set, the comparison is made with respect to the model for which the dependent variable is equal to the constant which has been set.
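A minimal sketch of the R² and adjusted R² computations, with equal weights (so W = n) and invented observed and predicted values:

# Minimal sketch (Python/NumPy): R² and adjusted R² for a fitted model,
# assuming equal weights. Observed and predicted values are invented.
import numpy as np

y_obs  = np.array([54.0, 60.2, 57.1, 68.9, 62.0, 58.3, 63.5])
y_pred = np.array([55.1, 59.0, 57.9, 67.5, 61.8, 59.0, 63.0])
p = 2                                     # number of explanatory variables (assumed)

ss_res = np.sum((y_obs - y_pred) ** 2)
ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

n = len(y_obs)                            # with equal weights, W = n
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
print("R2 =", round(r2, 3), " adjusted R2 =", round(r2_adj, 3))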


The predictions and residuals table shows, for each observation, its weight, the value of the qualitative explanatory variable (if there is only one), the observed value of the dependent variable, the model's prediction, the residuals, the confidence intervals together with the fitted prediction, and Cook's D if the corresponding options have been activated in the dialog box. Two types of confidence interval are displayed: a confidence interval around the mean (corresponding to the case where the prediction would be made for an infinite number of observations with a set of given values for the explanatory variables) and an interval around the isolated prediction (corresponding to the case of an isolated prediction for the values given for the explanatory variables). The second interval is always wider than the first, the random variability being larger.


In this chapter, a detailed overview of response surface methodology has been presented, together with its mathematical background. The various RSM-related methods, such as the Box-Behnken design, multiple regression and the ANOVA model, have been described.


