Certificate

This is to certify that the dissertation entitled “Automated Face Detection and Facial Expression Recognition System” is a bona fide record of research work done by Narendra Kumar Rajput in the Dept. of Computer Science and Engineering. He carried out the study reported in this thesis independently under my supervision. I also certify that the subject matter of the thesis is the original contribution of the author and has not formed the basis for the award of any degree or diploma earlier.

Associate Professor, H.O.D, CSE,
Recognizing and understanding human emotion is a powerful means of improving interaction between humans and machine communication systems. The most expressive way to extract and recognize human emotion is through facial expression analysis. In this dissertation we present an automatic method for extracting, understanding and recognizing facial expressions and emotions from input still images using approximated Bezier curves. To evaluate the performance of the proposed scheme, we carried out a real implementation and experiments to verify the efficiency of emotion recognition on an expressive facial database. The obtained results reveal good performance.


I would like to thank my mentor for introducing me to this exciting area of biometry. Her attention to detail, quest for excellence, and love for perfection have inspired me to give my best. I am deeply indebted to her for making my M.Tech experience a memorable one. It is due to her faith in me that today I am submitting this thesis. She has always given me her time, encouragement, technical feedback and the moral support that I needed during my research. It has been my privilege working with her and learning from her.

Narendra Kumar Rajput
M.Tech (CSE) Scholar
List of Tables
List of Figures
Biometric system
What are Facial Expressions?
Facial Expression Recognition System
Need of Facial Expression Recognition System
Issues of Facial Expression Recognition System
Motivation behind face based biometric system
Existing systems
Overview of Thesis
2.1 General Framework for Facial Expression
2.1.1 Pre-Processing
2.1.2 Feature Extraction
Motion Based Feature Extraction
Model Based Feature Extraction
Muscle Based Feature Extraction
2.2 Classification
2.3 Summary
3.1 Detection
3.1.1 Detection of eyes and mouth
3.2 Recognition
3.2.1 Skin Filter Segmentation
3.2.2 Big Connect
3.2.3 Bezier Curve
3.3 Training and Recognition of Facial Expression
3.4 Summary
4.1 Experimental Setup
4.2 Results and Discussion
4.3 Summary
5.1 Conclusion
5.2 Future Work
List of Publications

List of Tables
2.1 Assessment of motion based facial expressions
2.2 Assessment of model based facial expressions
2.3 Assessment of muscles based facial expressions
2.4 Assessment of hybrid approaches
2.5 Classification techniques and results
4.1 Personal information of person
4.2 Feature points of person
4.3 Rules of facial expression recognition
4.4 Results of facial expression recognition

List of Figures
2.1 General Framework for Facial Expression
2.2 General framework for facial expressions
2.3 Distinction of feature extraction and representation

3.1 Block diagram of proposed methodology
3.2 Detection of eyes
3.3 Detection of mouth
3.4 Skin filter apply on eyes and mouth
3.5 Result of Big Connect
1.1 Background
1.1.1 Biometric system
Biometrics is a rapidly evolving technology which has been widely used in forensics, e.g. criminal identification, secured access, and prison security. A biometric system is essentially a pattern recognition system that recognizes a person by determining the authenticity of a specific physiological and/or behavioral characteristic possessed by that person. The face is one of the most commonly accepted biometrics used by humans in their visual interaction. The challenges in a face recognition system arise from various issues such as aging, facial expressions, variations in the imaging environment, illumination, and pose of the face.

Biometrics is a term that encompasses the application of modern statistical methods to the measurement of biological objects [4]. Hence, biometric recognition refers to the use of distinctive physiological and behavioral characteristics (e.g. face, fingerprint, hand geometry, iris, gait, and signature), called biometric identifiers or simply biometrics, for automatically recognizing a person. Biometrics has been used in several domains, such as person authorization in e-Banking and e-Commerce transactions or within the framework of access control for security areas. Ideally, the biometric characteristics used should satisfy the following properties:
Robustness: This means that the biometric should be sufficiently invariant over a period of time (Permanence) and thus maintain low intra-class variability.

Distinctiveness: This indicates that biometric identifiers should differentiate (Uniqueness) any two persons and thus have large inter-class variability.

Availability: Ideally, a biometric identifier should be possessed by every person (Universality).

Accessibility: The characteristic should be easy to acquire (Collectability).

A biometric system involves three aspects: data acquisition and pre-processing, data representation, and decision making. It can thus compare a specific set of physiological or behavioral characteristics extracted from a person with a template/model acquired beforehand and recognize the individual. The digital representation recorded in a database as a description of a physical trait is defined as a template and is obtained by feature extraction algorithms. Among the different traits, the motivations behind using the face and fingerprint for person authentication are manifold.
1.1.2 What are Facial Expressions?
Facial expressions are the facial changes in response to a person’s internal emotional states, intentions, or social communications. According to Fasel and Luttin, facial expressions are temporally deformed facial features, such as the eyelids, eyebrows, nose, lips and skin texture, generated by contractions of facial muscles. They observed typical changes of muscular activity to be brief, lasting for a few seconds, but rarely more than five seconds or less than 250 ms. They also point out the important fact that felt emotions are only one source of facial expressions, besides others like verbal and non-verbal communication or physiological activities. Though facial expressions obviously are not to be equated with emotions, in the computer vision community the term “facial expression recognition” often refers to the classification of facial features into one of the six basic emotions: happiness, sadness, fear, disgust, surprise and anger. Facial expressions play an important role in our relations. They can reveal the attention, personality, intention and psychological state of a person. They are interactive signals that can regulate our interactions with the environment and other persons in our vicinity. According to Mehrabian, about 7% of human communication information is communicated by linguistic language (the verbal part), 38% by paralanguage (the vocal part) and 55% by facial expression. Therefore, facial expressions provide the most important information for the perception of emotions in face-to-face communication.

1.1.3 Need of Facial Expression Recognition (FER) system
Automatic facial expression recognition (FER) systems are gaining interest in various application areas like lie detection, neurology, intelligent environments, clinical psychology, behavioral and cognitive sciences, and multimodal human-computer interfaces (HCI). Such a system uses facial signals as an important modality and makes interaction between human and computer more robust, flexible and natural. In surveillance systems and in intelligent environments, FER is useful in the following ways:
1. A real-time automatic surveillance system that detects human faces and facial expressions accurately can be installed at busy public places like malls, airports, railway stations or bus stations around the world to help prevent possible terrorist attacks. The system would detect and record the face and facial expression of each person/passenger. If any faces appeared to look angry or fearful for a period of time, the system might set off an internal alarm to warn security personnel about the suspicious passengers.

2. In a real-time gaming application, a real-time facial expression recognition system can observe players’ facial expressions. If a player shows surprise or excitement, the system would know that the particular part of a game is being highly enjoyed by the player. If a facial expression appeared to be neutral for a period, the system might notify the game to change some of its elements or difficulty levels. This kind of intelligence can enhance playability and interactivity of different types of games.

3. An educational game, like a math learning game for elementary school students, could tell if the math topic shown on a screen is too difficult based on the facial expression of the student who is playing the game.

4. In a driver observation system, a sleepy face of the driver can be tracked by the camera, which may indicate whether he or she is getting tired while driving. The system might then set off warning signals to the driver or help the driver to pull over safely. Such a system might prevent many accidents caused by driving under the influence or by driver fatigue.

5. In educational institutions, a real-time facial expression recognition system is useful for detecting and recording the expressions of the students sitting in a class. A teacher can evaluate himself from the recorded expressions and modify his teaching methodology.

1.1.4 Facial expression recognition system
There has been tremendous progress in the field of recognition systems for the human face, and in recent times various techniques have been proposed for the recognition of digital images [1-4]. One of the very strong capabilities of a person is to detect and recognize human faces and their images. Geometric feature-based recognition can be applied when the individual features of the eyes and mouth and their boundaries are clearly determined [5-6]. It is of utmost importance to extract the relative location of, and distance between, these landmark features to obtain proper identification and recognition [7-9]. A computer program can also perform the same task with satisfactory performance if proper facial features are given. A considerable number of facial features exist in the human face [10-12]. A few of them are the eyebrow thickness and its vertical position from the center of the eye, the left and right eyebrows, the nose's vertical position and width, the shape of the chin together with the face boundary, the vertical position of the mouth, and the width and size of the upper and lower lips. Identification of these features through a vector of geometric features has a great impact on higher recognition speed and lower memory requirement. We can further limit the use of memory by optimizing the features necessary to recognize human faces. As there are various faces, various face recognition approaches exist, mainly classified as knowledge-based, feature-invariant, template-based and appearance-based approaches. The knowledge-based approach relies on the design of rules from the available knowledge base about face geometry; most of these rules depend on the relative distance and location of significant geometric facial features like the eyes, nose, eyebrows and face boundary. Feature-invariant approaches search for the basic features of digital facial images. Various features are observed and grouped by the geometry of the face [13-14], and it is vital to choose a set of good features. The template-based approach works by template matching against a single face template; the matching procedure is mostly correlation based. The appearance-based methods are used for recognition with eigenfaces; this method treats human faces as patterns of pixel intensities.
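To make the geometric feature idea concrete, the following minimal Python sketch builds a small feature vector from a handful of hypothetical landmark coordinates. The landmark names, positions and the particular distances chosen are illustrative assumptions, not the exact feature set used in this thesis; normalizing every distance by the inter-ocular distance makes the vector invariant to image scale.

```python
import math

def euclidean(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def geometric_feature_vector(landmarks):
    """Build a small geometric feature vector from facial landmarks.

    `landmarks` maps hypothetical landmark names to (x, y) pixel
    positions; every distance is normalized by the inter-ocular
    distance so the vector does not depend on image scale.
    """
    iod = euclidean(landmarks["left_eye"], landmarks["right_eye"])
    return [
        euclidean(landmarks["left_eyebrow"], landmarks["left_eye"]) / iod,
        euclidean(landmarks["right_eyebrow"], landmarks["right_eye"]) / iod,
        euclidean(landmarks["nose_tip"], landmarks["mouth_center"]) / iod,
        euclidean(landmarks["mouth_left"], landmarks["mouth_right"]) / iod,
    ]

# Hypothetical landmark coordinates (pixels) for one face:
face = {
    "left_eye": (30, 40), "right_eye": (70, 40),
    "left_eyebrow": (30, 30), "right_eyebrow": (70, 30),
    "nose_tip": (50, 60), "mouth_center": (50, 80),
    "mouth_left": (38, 80), "mouth_right": (62, 80),
}
print(geometric_feature_vector(face))
```

Because the vector is built from ratios, enlarging or shrinking the face image leaves it unchanged, which is exactly the property a geometric feature-based recognizer needs.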

This research work proposes a novel approach to recognize human facial expressions using Bezier curve fitting [15]. Bezier curves are very robust across a variety of applications; in image processing in particular, they apply to object recognition, face recognition, and human gait recognition, and they also work for fingerprint and other biometric recognition systems. This thesis presents a method to recognize the facial expressions of human frontal faces using approximated Bezier curves. The primary essential features of faces, such as the eyes, eyebrows, nose, lips and face boundary, are extracted, and using a minimum of these features, facial expressions are recognized efficiently. Although many systems exist [16-18] that use Bezier curves to recognize facial expressions, this biometric system produces an accurate result.
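A Bezier curve of degree n is defined by n+1 control points and can be evaluated with de Casteljau's algorithm by repeated linear interpolation. The sketch below, with hypothetical control points for an eyebrow contour, illustrates the curve evaluation that underlies Bezier-based feature approximation; it is a generic illustration, not the exact fitting procedure of the proposed system.

```python
def bezier_point(controls, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1) using
    de Casteljau's algorithm: repeatedly interpolate between
    neighbouring points until one point remains."""
    pts = list(controls)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def bezier_curve(controls, n=20):
    """Sample n+1 points along the curve for t = 0 ... 1."""
    return [bezier_point(controls, i / n) for i in range(n + 1)]

# Hypothetical cubic control points approximating an eyebrow contour (pixels):
eyebrow = [(20, 50), (35, 40), (55, 40), (70, 52)]
samples = bezier_curve(eyebrow, n=10)
print(samples[0], samples[-1])  # the curve passes through the end control points
```

The key property exploited by curve-based feature representations is that a whole feature contour is summarized by just a few control points, which is why Bezier approximation reduces both memory use and comparison cost.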

1.1.5 Issues in Facial expression recognition system
Numerous advanced FER frameworks have been proposed that recognize only a few of the seven basic facial expressions; these frameworks were unable to recognize all seven. It has also been observed that the problem of facial expression recognition has mostly been addressed by comparing other expression images with neutral images. This approach increases the complexity in terms of comparison, which slows down the speed of computation, and it also increases the memory requirement of the system.

Many researchers used model-based approaches like PDM/ASM for feature extraction, but these approaches suffer from the fact that manual labor is necessary to construct the shape models. Many modern FER systems use appearance-based features extracted using techniques like LBP, wavelets, PCA, ICA, EICA and FLDA, and achieve recognition accuracy in a moderate range for a limited number of images.

1.2 Motivation
I was motivated to increase the speed of computation, utilize memory better, and achieve high efficiency in the classification and recognition of facial expressions by suggesting modifications to the feature extraction, classification and recognition algorithms.

The face is one of the most acceptable biometrics, and it has also been the most common means of recognition that humans use in their visual interactions. The problem with verification systems based on fingerprint, voice, iris and the most recent gene structure (DNA fingerprinting) has been the problem of data acquisition. For example, for a fingerprint the person concerned should keep his/her finger in the proper position and orientation, and in the case of speaker recognition the microphone should be kept in the proper position and at the proper distance from the speaker. However, the method of acquiring face images is nonintrusive, and therefore the face can be used as a biometric trait for covert systems (where the user is unaware that he is being subjected to recognition). The face is a universal feature of people. Face recognition is important not only because of its large number of potential applications in research fields but also because its solution would help in solving other classification problems like object recognition.

Existing systems
Human facial expression detection is a very useful process and has been an important research area in recent years, but it is very difficult to implement in practice. First of all, we need to obtain images and videos as input; these are then passed to the feature extraction process, in which the input face is divided into an upper region and a lower region. The features of the face, like the cheeks, chin, wrinkles and the movements of the eyebrows, lips, eyes etc., are then measured by calculating the height and width of each feature. That data is fed to a classifier, which tests this dataset against an already trained dataset and reports which face contains which kind of expression.

Some existing systems:
A. Automated Facial Expression Recognition System:
Step-1: Video Processing
Step-2: Shape and Appearance Modelling
Step-3: Expression Classification
Step-4: After the expression classification, the algorithm provides operators with many real-time outputs like reporting, trend analysis, snapshots and indicators.

It cannot detect the presence of deception directly; this is a subject for further research.

B. Fuzzy rule based facial expression recognition:
Step-1: Input Video
Step-2: Frame Extraction
Step-3: Feature Point Extraction
Step-4: FAP Extraction
Step-5: Fuzzification
Step-6: Expression Detection
C. Recognition of Facial Expression using Principal Component Analysis and Singular Value Decomposition:
Step-1: Images which are inputted, given for the pre-processing
Step-2: Features are extracted and inputted to the classifier
Step-3: Then two images are compared, and the required expression is detected or recognized.

The main disadvantage of this algorithm is that if there is any object on the face, for example if a person is wearing glasses or has a beard, the algorithm cannot eliminate such objects and has trouble detecting the correct expression.
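The PCA step at the heart of approach C can be sketched in a few lines: mean-center the training vectors, form the covariance matrix, and estimate the leading eigenvector (the first "eigenface") by power iteration. The toy four-pixel "images" below are purely illustrative assumptions, not data from any cited system.

```python
def pca_top_component(samples, iters=100):
    """Estimate the first principal component (the leading "eigenface")
    of row-vector samples via mean-centering and power iteration."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in samples]
    # sample covariance matrix (d x d)
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        # multiply by the covariance matrix, then renormalize
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def project(sample, mean, component):
    """Coefficient of one sample along the principal component."""
    return sum((sample[j] - mean[j]) * component[j] for j in range(len(sample)))

# Toy flattened 2x2 "face images"; all variation lies in the first pixel.
faces = [[2, 0, 0, 0], [4, 0, 0, 0], [6, 0, 0, 0]]
mean, pc1 = pca_top_component(faces)
print([round(c, 3) for c in pc1])
```

Real eigenface systems work with thousands of pixels per image and keep several components, but the principle is the same: each face is reduced to a few projection coefficients, and comparison happens in that low-dimensional space.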

D. Facial expression recognition using neural network:
Step-1: input image is obtained through webcam
Step-2: optical flow method-based face detection process
Step-3: image pre-processing
Step-4: Principal Component Analysis is performed
Step-5: classification processing using feed forward artificial neural network
It will not work properly in unconstrained environments.

E. Facial expression recognition using 3-D facial feature distances:
Step-1: The characteristic distance vector, as defined in a table of six characteristic distances, is extracted.

Step-2: The distance vector is classified by a neural network that is trained using the back-propagation algorithm.

Step-3: The sixth distance is used to normalize the first five distances.

In this algorithm there is some confusion between the anger class and the neutral class, so the recognition rate for the anger class is lower.
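Step-3's normalization can be sketched in a few lines: dividing the first five characteristic distances by the sixth removes the dependence on face size, so faces photographed at different scales yield comparable vectors. The numeric values below are hypothetical.

```python
def normalized_distance_vector(distances):
    """Normalize the first five characteristic distances by the sixth
    (reference) distance, making the vector scale-invariant."""
    if len(distances) != 6:
        raise ValueError("expected six characteristic distances")
    ref = distances[5]
    if ref == 0:
        raise ValueError("reference distance must be non-zero")
    return [d / ref for d in distances[:5]]

# Hypothetical raw distances in pixels; the sixth entry is the reference.
print(normalized_distance_vector([12.0, 8.0, 20.0, 16.0, 10.0, 4.0]))
```

Doubling every raw distance (a face imaged twice as large) leaves the normalized vector unchanged, which is the whole point of the sixth-distance normalization.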
The primary goal of this research is to design, implement and evaluate a novel facial expression recognition system using various statistical learning techniques. This goal is to be realized through the following points:
1. System-level design: In this stage, we use existing techniques from related areas as building blocks to design our system.

A facial expression recognition system usually consists of multiple components, each of which is responsible for one task. We first need to review the literature and decide the overall architecture of our system, i.e., how many modules it has, the responsibility of each of them and how they should cooperate with each other.

Implement and test various techniques for each module and find the best combination by comparing their accuracy, speed, and robustness.

2. Algorithm-level design: Focus on the classifier, which is the core of a recognition system, and try to design new algorithms which hopefully perform better than existing ones.
Organization of Thesis
The rest of the thesis is organized in the following way. Chapter 2 (Literature Review) discusses the techniques and recent developments in the field of facial expression recognition. Chapter 3 (Proposed Work) presents the technique I have used to implement the system and discusses the algorithms used for face detection and expression recognition. Chapter 4 presents the experimental results. Chapter 5 gives the conclusion and future work.

2.1 Facial Expressions
Human facial expression recognition is one of the most active research areas in the fields of human-computer interaction (HCI), smart environments, medical applications, artificial-intelligence-based robotics and automated access control. Recognizing facial expressions is a complex task, and several limitations exist, such as lighting conditions, age, and similar expression types. Ekman and Friesen [20] identified 6 basic facial expressions (emotions), shown in Figure 2.1: happiness, surprise, disgust, sadness, anger and fear. As per Mehrabian [21], 55% of communicative cues can be judged by facial expression; hence the recognition of facial expressions has become a major modality. For example, if smart devices like computers or robots can sense and understand a human's intention from their expression, it will help the system assist them by giving suggestions or proposals as per their needs. Automatic facial expression and facial AU (Action Unit) recognition have attracted much attention in recent years due to their potential applications.

The facial expression approach (as per Anastasios et al. [23]) can be divided into three major steps: face detection, so that the face in an image is located for further processing; facial feature extraction, which is the method used to represent the facial expressions; and finally classification, which is the step that assigns the extracted features to the appropriate expressions. Generally, the face is an amalgamation of bones, facial muscles and skin tissues [24]. When these muscles contract, deformed facial features are produced [26].

According to Chin and Kim [24] and Ekman and Friesen [25], a facial expression acts as a rapid signal that varies with the contraction of facial features like the eyebrows, lips, eyes, cheeks etc., thereby affecting the recognition accuracy. On the other hand, static signals (skin color, gender, age etc.) and slow signals (wrinkles, bulges) do not portray the type of emotion but do affect the rapid signal.

The work on facial expressions basically started in the nineteenth century. In 1872 Darwin [27] introduced the idea that there are definite inherent emotions that are derived from allied habits and are referred to as basic emotions. His idea was based on the assumption that physiognomies are universal across ethnicities and customs, encompassing basic emotions like happiness, sadness, fear, disgust, surprise and anger.

The general framework for automatic facial expression recognition is shown in Figure 2.2. Primarily, the face images are acquired and normalized in order to eliminate complications like pose and illumination during face analysis. Feature extraction is a major milestone, which uses various techniques, such as motion-, model- and muscle-based approaches, to characterize facial features. Finally, these features are classified and trained in different subspaces and then used for recognition.

Fig 2.2. General framework for facial expressions
Each facial expression recognition system consists of three modules: pre-processing, feature extraction, and classification.
2.1.1 Pre-processing:
Ideally, faces are acquired by locating them in chaotic backgrounds. Pre-processing is applied to normalize images so that they can easily be processed in a given condition or scenario [28]. For pre-processing, different researchers used different algorithms, either applying their own proposed steps or using already pre-processed images to test the proposed algorithm; pre-processing is not a mandatory step. Image acquisition is also a part of pre-processing. Accurately measuring the position of faces plays a vital role in extracting facial features [29, 30]. The Active Shape Model (ASM) is a feature orientation method used to extract transient and intransient facial features; it was first applied by Steffens et al. [31], who, however, preferred the person spotter system for feature extraction. In 1982 Pentland and Essa used view-based and modular eigenspace methods to locate the faces [32, 33]. Automatic facial expression recognition is a complex task because the outer appearance of a person may vary with the changed mood of the person and thus subsequently affect the facial expression. These expressions may vary with age, ethnicity, gender and occluding objects like facial hair, cosmetic products, glasses, hair etc. Additionally, pose, lighting conditions and expression variations also affect the face recognition rate. Although face normalization is not mandatory [34], it is used to overcome the harmful effects of illumination and pose variations, because facial expression recognition depends on angle-, distance- and illumination-invariant conditions for each face [35].
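As one concrete example of such a normalization step, the following sketch implements plain histogram equalization on a grayscale image stored as a list of rows. This is a generic illumination-normalization technique, assumed here for illustration, not necessarily the one used by any of the cited systems.

```python
def equalize_histogram(img, levels=256):
    """Histogram equalization for a grayscale image (list of rows of
    integer pixel values): spread the intensity distribution over the
    full range to reduce the effect of uneven illumination."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function of the pixel intensities
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 1
    # look-up table mapping old intensities to equalized ones
    lut = [round((c - cdf_min) * scale) for c in cdf]
    return [[lut[p] for p in row] for row in img]

# A tiny low-contrast 2x2 image is stretched to the full 0..255 range:
print(equalize_histogram([[50, 50], [100, 100]]))
```

After this step, two photographs of the same face taken under dim and bright lighting produce more similar intensity distributions, which helps the later feature extraction stage.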

2.1.2 Feature Extraction:
Physically, the face comprises many local and global features which may change with the movement of facial muscles and skin tissues. This alteration causes a serious dilemma in automatic face recognition that degrades the recognition rate. Before going into deeper particulars, let us first take a brief view of the classification of facial features. 1) Intransient facial features lie permanently on the face, but may be deformed by changes of facial expression [36]. 2) Transient facial features are creases, i.e. wrinkles, bulges etc., that affect the skin texture but do not indicate the type of emotion [38]. Nevertheless, facial expression provides a way of transferring messages, so it is mostly video-based. The quality of video frames depends on the environment in which they are captured and is affected by the lighting and pose conditions [39]; e.g. Cohen et al. used Tree-Augmented Naïve Bayes (TAN), while Michel et al. in 2003 [41] used an SVM classifier to track facial features. Finally, Wang et al. [40] provide an up-to-date survey of image-based approaches till 2009. Transient and intransient facial features can be categorically divided according to whether they rely on certain actions of muscles or on the warping of faces and facial features, respectively, as shown in Figure 2.3.

Fig 2.3. Distinction of feature extraction and representation

Feature vs. Appearance Based Approaches
Basically, face recognition under varying facial expression algorithms is categorized into local or feature-based approaches [42] and holistic or appearance-based approaches, which process the whole face to extract facial features and hence give detailed information. But sometimes all of this information becomes irrelevant, because not all facial features are changed by the appearance of a single emotion; for example, the degree of a smile does not affect all features but influences only the appearance of the cheeks and lips. So, in contrast to holistic approaches, local approaches provide a way to process only the affected facial features [42].

Kakumanu and Bourbakis [43] used a local graph to track facial features and a global graph to store the information of the face texture. Chang et al. [44] initiated the idea of toning the overlapping area around the nose. Similarly, Gundimada and Asari [45] selected the local facial features via modular kernel eigenspaces for multi-dimensional spaces. Feature- and appearance-based approaches are further categorized into motion-, model-, muscle-based and hybrid approaches, which provide further distinctions of motion-model based, motion-based image coding and model-muscle based approaches.

Table 2.1. Assessment of motion based facial expressions.

Motion-Based Approaches

References: Dai et al. 2000 [46]
Approach: Appearance based
Feature Extraction: Difference image calculated from the YIQ image
Classifier: Optical flow projection histogram for each expression is used to classify features
Performance: Calculated on the basis of classification of facial features

References: Zhang et al. 2001 [35]
Approach: Appearance based
Feature Extraction: Using AAMs
Classifier: —
Performance: Calculated on the basis of classification of facial features

Model Based Approaches:
Model-based methods are very useful for morphological operations, because automatic face recognition is performed after removing noisy data [54]. Vretos et al. [48] also exploit the Candide model grid and locate two eigenvectors of the model vertices using PCA. Most researchers focus on model-based approaches because they give concise information about the geometry of facial features. Sun et al. [56] attempt to improve on prior work and highlight the limitations of the control-point vertices in model-based approaches. The authors try to improve the vertices by means of a tracking model; to capture the spatial and temporal information, a spatiotemporal hidden Markov model (ST-HMM) is used, coupling S-HMM and T-HMM. The overall snapshot of model-based facial expression recognition is shown in Table 2.2.

Table 2.2 Assessment of model based facial expression.

Model Based Facial Expressions

References: Bindu et al. 2007 [36]
Feature Extraction: Discrete Hopfield Networks for feature extraction, PCA based
Data base: Action Unit Coded Facial
Performance: accuracy of 85.7%

References: Martin et al. 2008 [47]
Approach: Using an AAM based model
Classifier: AAM classifier set instead of MLP and SVM based
Performance: Anger emotion with average accuracy of 94.9%, but other emotions are low, between 10 and 30%

References: Vretos et al. 2009 [48]
Feature Extraction: Model vertices are determined using PCA
Classifier: SVM based facial expressions
Performance: accuracy achieved up to 90%

References: et al. 2005 [49]
Feature Extraction: Control points of the Candide model actually determine the transient
Classifier: PCA + LDA classifier
Data base: FERET database for neutral & smiling
Performance: The expression with normalization achieves 73.8% results

References: Bronstein et al. 2007 [50]
Approach: Feature based
Data base: Expression data
Performance: Minimum error = 7.09%

Muscles-Based Approaches
The field of muscle-based emotion recognition expanded further in 2004 when Ang et al. [2] examined facial muscle activity so that computers could automatically recognize facial emotions. Primarily, the emotions of males and females were captured from facial muscles through electromyogram (EMG) sensor signals and used to create feature templates. Ibrahim et al. [51] expanded the work of Ang et al. in 2006 [55] and used surface electromyography (sEMG) to acquire facial muscle actions in different age categories, with mean ages of 47.5 and 23 years for the female subjects. Similarly, in 2008 research was conducted by Takami et al. on quasi-muscles to quantify facial expressions by estimating FPs [53]. On the other hand, Jayatilake et al. [52] tried to restore facial expressions (smile recovery) for paralyzed patients by exploiting a robot mask. The overall appraisal of muscle-based approaches is depicted in Table 2.3.

Table 2.3 Assessment of muscles based facial expressions.

Muscles Based Facial Expressions

References: Choe et al. 2001 [57]
Feature Extraction: Tracking of muscle contraction via optical capture
Classifier: Algorithm is implemented on a PC platform
Performance: The method provides superior results

References: Ang et al. 2004
Feature Extraction: Features extracted using EMG
Performance: Achieves 85 to 94.44%

References: Takami et al. 2008 [53]
Feature Extraction: Displacement of controlled feature points
Performance: Quasi-muscles are helpful for tracking FPs
Table 2.4 Assessment of hybrid approaches.

Motion-Model Based Approaches
References Approach Feature Extraction Classifier Data base Performance
Hsieh et al
Feature Based
Calculate intraperson
from interperson
overall OF
based classifier
University 3D
database Average recognition rate of
Model-Muscles Based Approaches
Ohtaet al 2000
Feature based
Muscle based
control points
— — Facial parameters like eyebrows,
mouth corners and upper lip
shows effective results.

Tang et al 2003
based Reference and
control points
— VC++/Open GL The more the NURBS flexible
the more it gave the desired
Chin et al 2009
Rubber band

Not based on
3D data base
Surprise achieve 8.3, fear = 5.5,
disgust = 7.2, anger = 8.7,
happiness = 8.0 and sadness =
2.2 Classification
The two basic linear classification techniques are principal component analysis (PCA) [58] and linear discriminant analysis (LDA) [58]. Other commonly used classifiers are independent component analysis (ICA), support vector machines (SVM) [58], singular value decomposition (SVD), kernel versions such as KPCA and KLDA, rank-weighted k-nearest neighbours (k-NN) [63], the elastic bunch graph algorithm, AAM [64], the active shape model (ASM), the minimum distance classifier, back-propagation neural networks [65] and 3D morphable model based approaches. For a supplementary perspective, Tsai and Jan analyse different subspace models in [66]. Table 2.5 covers a list of the most recent techniques and their relevant information such as dataset, accuracy, conclusion and future work, if any.

Table 2.5 Classification Techniques and result
Sr. No Method/ Technique(s)/ (Database) Result/Accuracy Conclusion
1 Neural Network + Rough Contour Estimation Routine (RCER) 15 (Own Database) 92.1% recognition rate In this paper, they describe a radial basis function network (RBFN) and a multilayer perceptron (MLP) network.

2 Principal Component Analysis 18 (FACE94) 35% less computation time and 100% recognition Useful where larger database and less computational time
3 PCA + Eigenfaces 19 (CK, JAFFE) 83% Surprise in CK, 83% Happiness in JAFFE, Fear was the most confused expression Compared with the facial expression recognition method based on the video sequence, the one based on the static image is more difficult due to the lack of temporal information.

4 2D Gabor filter 22 (Random Images) 12 Gabor Filter bank used to locate edge Multichannel Gabor filtration scheme used for the detection of salient points and the extraction of texture features for image retrieval applications.

5 Local Gabor Filter + PCA + LDA 23 (JAFFE) Obtained 97.33% recognition rate with the help of PCA+LDA features They conclude that PCA+LDA features partially eliminate sensitivity of illumination.

6 PCA + AAM 24 (Image sequences from FG-NET consortium) 88% for expression recognition from frames and 88% for the combined recognition. The computational time and complexity were also very small. Future work: improve the efficiency.
7 Gabor + SVM approach HAAR + Adaboost 25 (Cohn-Kanade database) 99.54% for Mouth AU in G+S and 82.81% with H+A. The Haar+Adaboost method achieved comparable accuracy to the Gabor+SVM method for AUs of the eye and brow regions, but it performed very poorly for AUs of the mouth.

8 Dynamic HAAR-like features 26 (CMU expression Database + Own Database) Experiments on the CMU facial expression database and our own facial AU database showed that the proposed method has a promising performance. They extracted dynamical HAAR-like features to capture the temporal information of facial AUs and expressions, and then further coded them into binary pattern features.

9 2D appearance-based local approach + Radial Symmetry Transform 27 (JAFFE) 83% for expressions of happy and surprise and an accuracy of about 78% for expressions of anger and sad. In the face, they use the eyebrow and mouth corners as the main 'anchor' points. The system, based on a local approach, is able to detect partial occlusions.

10 2D-LDA and SVM 29 (JAFFE) The recognition rate of this method is 95.71% using the leave-one-out strategy and 94.13% using the cross-validation strategy. They investigate various feature representations and expression classification schemes to recognize seven different facial expressions on the JAFFE database. Experimental results show that the proposed system using DWT, 2D-LDA and linear one-against-one SVMs outperforms others.

11 2-D Gabor Filter 30 (Palm print database) 24 verification tests are carried out for testing the 12 sets of parameters on the two databases and their results in table 1 30. They proposed Gabor Filter to extract features in different angle, size and phase offset.

The proposed method for recognition of facial expression and emotion is composed of two major steps: the first is detection and analysis of the facial area from the original input image, and the second is verification of the facial emotion using characteristic features in the region of interest. A block diagram of our proposed method is depicted in Figure 3.1.

Fig. 3.1 Block diagram of proposed methodology
The proposed method (shown in Fig. 3.1) performs detection first, then recognition. The idea is to select only the part of the face that drives the expression; unnecessary parts such as the hair area and the background, which decrease the accuracy rate and do not contribute to recognizing expressions, can be cropped.

3.1 Detection:
In this step the system detects the human face. We have to select only the area from which the expression is generated, so the algorithm selects only the region of interest (ROI). First the human face is detected; second, the parts of the face such as the eyes, mouth and eyebrows. For the detection, the Canny edge detector together with Haar cascades is used. This uses the OpenCV library (which contains the .XML cascade files) to detect facial features such as the eyes and mouth. OpenCV implements a Viola-Jones detector for face detection, among various other possible detectors. It obtains a high detection rate at the cost of a low rejection rate, giving a high possibility of false positives (a face detected where there is none) and a low possibility of false negatives (a face missed).

After the detection we find the ROI as a rectangular area where further processing will be done. If the face is detected via a camera, the camera quality must be good, otherwise some features may be lost.

OpenCV comes with a trainer as well as a detector. If we want to train our own classifier for any object, like cars or planes, we can use OpenCV to create one. We have used the OpenCV library for detection of the face, eyes and smile. OpenCV already contains many pre-trained classifiers for the face, eyes, smile, etc. The XML files are stored in the opencv/data/haarcascades/ folder. First, we need to load the required XML classifiers, then load our input image (or video) in grayscale mode.

For face detection we have to load the XML file given below (backslashes must be escaped in Java string literals):
String CASCADE_FILE = "E:\\fers\\haarcascade_frontalface_alt2.xml";
Similarly for eyes and mouth detection:
String CASCADE_FILE = "E:\\fers\\haarcascade_eye.xml";
String CASCADE_FILE = "E:\\fers\\haarcascade_mcs_mouth.xml";
3.1.1 Detection of eyes and mouth:
We have to separate the regions of the eyes and mouth from a still picture using the Canny edge detection algorithm, developed by John F. Canny in 1986. The Canny edge detection algorithm was used through the OpenCV API, in which the haarcascade files for the eyes and mouth are predefined. By applying this algorithm, a red colored edge is drawn which shows the part of the eyes and mouth being extracted for further processing. Figure 3.2 shows the extraction of the eyes and Figure 3.3 shows the extraction of the mouth.

Fig. 3.2 Detection of eyes

Fig 3.3 Detection of mouth
3.2 Recognition
This includes-
3.2.1 Skin color filter using RGB model
3.2.2 Big connect
3.2.3 Bezier curve
3.2.1 Skin color filter
Detection of the eyes and mouth also includes some area of skin near them. To extract exactly the features only, we have to apply a skin filter. By applying the filter, the skin color is changed to white while the eyes and mouth are changed to black. See Fig. 3.4.
Skin segmentation algorithm using RGB model:
Step-1: Read the image
Step-2: Separate the color indexed image into its RGB components.

Step-3: Convert the RGB matrices into a grey-scale intensity image.

Step-4: A pixel (R, G, B) is classified as skin if:
R > 95 and G > 40 and B > 20,
(max{R, G, B} − min{R, G, B}) > 15,
|R − G| > 15 and R > G and R > B
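The Step-4 rule translates directly into code. A minimal sketch in Python (the thesis implementation used Java; the function names here are illustrative):

```python
def is_skin(r, g, b):
    """Skin classification rule from Step-4 (RGB model)."""
    return (
        r > 95 and g > 40 and b > 20
        and (max(r, g, b) - min(r, g, b)) > 15
        and abs(r - g) > 15 and r > g and r > b
    )

def apply_skin_filter(pixels):
    """Map skin pixels to white (255) and non-skin (eyes, mouth) to black (0)."""
    return [[255 if is_skin(*p) else 0 for p in row] for row in pixels]
```

Applied to an RGB image, this leaves the eye and mouth regions as black blobs on a white skin background, which is exactly the input the big connect step expects.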
3.2.3 Big connect
The skin filter yields black areas for further processing, but some black parts may remain besides the desired result (eyes, mouth). To remove those parts we use the big connect operation: it keeps the largest black area, which covers an eye or the mouth, and removes the other parts. See Fig. 3.5.
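Big connect amounts to keeping only the largest connected component of the binary mask. A possible sketch in Python, assuming 4-connectivity and a breadth-first flood fill (the thesis does not specify these details):

```python
from collections import deque

def big_connect(mask):
    """Keep only the largest 4-connected region of 1s (black pixels) in a binary mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 1 and not seen[i][j]:
                # BFS flood fill to collect one connected component
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```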

Fig.3.4 Skin filter apply on eyes and mouth

Fig 3.5 Show result of Big connect
3.2.3 Bezier curve:
The Bezier curve generates contour points considering global shape information, with the curve passing through the first and last control points [27]. If there are L+1 control points, their positions are defined as P_k = (x_k, y_k); 0 ≤ k ≤ L,
considering 2D shapes. These coordinate points are blended to form P(t), which describes the path of the Bezier polynomial function between P_0 and P_L:

P(t) = Σ_{k=0}^{L} P_k BEZ_{k,L}(t), 0 ≤ t ≤ 1 (i)

where the Bezier blending function BEZ_{k,L}(t) is the Bernstein polynomial, defined as [26]:

BEZ_{k,L}(t) = C(L, k) t^k (1 − t)^{L−k} (ii)

where C(L, k) = L! / (k! (L − k)!). The recursive formula used to determine coordinate positions is:

BEZ_{k,L}(t) = (1 − t) BEZ_{k,L−1}(t) + t BEZ_{k−1,L−1}(t) (iii)

where BEZ_{k,k}(t) = t^k and BEZ_{0,k}(t) = (1 − t)^k.

The coordinates of an individual Bezier curve are represented by the following pair of parametric equations:

x(t) = Σ_{k=0}^{L} x_k BEZ_{k,L}(t)
y(t) = Σ_{k=0}^{L} y_k BEZ_{k,L}(t) (iv)

An example of a Bezier curve with 4 control points is given in [25].
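Equations (i)-(iv) can be evaluated directly from the Bernstein form. An illustrative Python sketch (the thesis implementation was in Java):

```python
from math import comb

def bezier_point(points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1).
    points: list of (x, y) control points P_0..P_L."""
    L = len(points) - 1
    x = sum(comb(L, k) * t**k * (1 - t)**(L - k) * px for k, (px, py) in enumerate(points))
    y = sum(comb(L, k) * t**k * (1 - t)**(L - k) * py for k, (px, py) in enumerate(points))
    return (x, y)

def bezier_curve(points, n=100):
    """Sample n+1 points along the curve, from t=0 to t=1."""
    return [bezier_point(points, i / n) for i in range(n + 1)]
```

Note that the curve passes through the first and last control points (t = 0 gives P_0, t = 1 gives P_L), exactly as stated above.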

For applying the Bezier curve, we need to extract some control points in each interest region, located in the areas of the left eye, right eye and mouth. Thus, we apply big connect to find the largest connected area within each interest region from the eye map and mouth map. Then we find four boundary points of each region: the starting and ending pixels horizontally, and the top and bottom pixels of the central points vertically. After getting the four boundary points of each region, the Bezier curves for the left eye, right eye and mouth are obtained by drawing tangents to a curve over the four boundary control points.
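Finding the four boundary control points of a region can be sketched as follows (illustrative Python; the region is assumed to be given as a set of (x, y) pixels produced by big connect):

```python
def boundary_points(pixels):
    """Four boundary control points of a connected region:
    leftmost and rightmost pixels (extremes in x),
    topmost and bottommost pixels (extremes in y)."""
    left   = min(pixels, key=lambda p: p[0])
    right  = max(pixels, key=lambda p: p[0])
    top    = min(pixels, key=lambda p: p[1])
    bottom = max(pixels, key=lambda p: p[1])
    return left, right, top, bottom
```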

3.4 Training and Recognizing Facial Emotion:
The training database contains two tables, storing personal information and indexes of four emotions with each person's own curves from the facial expression analysis. For detection of facial emotion, we need to compute the distance in a one-to-one correspondence of each interest region between an input image and the images in the database. The Bezier curves are drawn over the principal lines of the facial features. To estimate a similarity matching, we first normalize the displacements, converting the width of each Bezier curve to 100 and scaling its height in proportion to its width.
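The width normalization step might look like this (an illustrative sketch; the exact scaling convention used in the thesis may differ):

```python
def normalize_curve(points, target_width=100.0):
    """Scale a curve so its width becomes target_width; the height is scaled
    by the same factor, i.e. in proportion to the width.
    Assumes a non-degenerate curve (width > 0)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    scale = target_width / (max(xs) - min(xs))
    # shift to the origin, then scale both axes by the same factor
    return [((x - min(xs)) * scale, (y - min(ys)) * scale) for x, y in points]
```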

4.1 Experimental setup:
The algorithm presented in the previous section is implemented in Java, and experiments are performed on an Intel Core 2 Duo CPU at 2.00 GHz with 2 GB RAM, as shown in Figure 8. The experiment shows the recognition results under different facial expressions such as smile, sad, surprise and normal. The proposed method achieves a successful emotion recognition ratio of 78.8%. The experimental results show that the success ratio is better for smile, because the control points and curves for smile are more distinctive. Grayscale images from the database are difficult to process, since the skin color pattern of the facial region cannot be filtered in such images.

4.2 Results and Discussion:
The expressions smile, sad, surprise and normal are considered in the implementation and testing of expression recognition. The faces with expressions are compared against the training face database. The training face database consists of two tables, storing personal information and indexes of four emotions with each person's own curves from the facial expression analysis. The Bezier points are interpolated over the principal lines of the facial features. These points for each curve form adjacent curve segments. The distance is calculated based on the curve segments. Then, the understanding and decision of facial emotion is made by measuring similarity between faces. Table 4.1 presents the personal information of a person.
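The conclusion names the Hausdorff distance as the similarity measure between curves. A minimal sketch of the symmetric Hausdorff distance between two normalized point sets (illustrative Python; the thesis's exact segment-wise matching may differ):

```python
def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets a and b:
    the largest distance from any point in one set to its nearest
    point in the other set."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(s, t):
        return max(min(dist(p, q) for q in t) for p in s)
    return max(directed(a, b), directed(b, a))
```

A small distance between the input curve and a stored emotion curve indicates a similar expression; the emotion with the minimum distance is chosen.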

Table 4.1 Personal information of person

Table 4.2, feature points of person, stores the Bezier points of the detected eye and mouth features. According to these Bezier points, the expressions are recognized.

Table 4.2 Feature points of person

To categorize facial emotion, we first need to determine the expressions, i.e. detection and recognition of the face. All images are colorful. Figure 5 shows sample facial images which express an emotion, for example, a neutral and a smiling expression. The Bezier points are interpolated according to the rules defined in Table 4.3.

Table 4.3 Rules of facial expression recognition
Emotion Movement of AUs
Smile Eye opening is narrowed, Mouth is opening, and Lip corners are pulled obliquely
Surprise Eye is tightened closed, and Lower lip corner depressor
Sad Eye and Mouth are opened, Upper eyelid raiser, and Mouth stretch
Normal Eye opened, mouth closed
The experiments show the recognition results under different facial expressions such as smile, sad, surprise and neutral. The proposed method recognizes 197 of the 250 faces, which means that a successful recognition ratio of 78.8% is achieved, as shown in Table 4.4. The experimental results reveal that the success ratio is better for smile, because the control points and curves for smile show more appreciable changes from the neutral expression. The result is very good for training images: on the trained images it gives 99% accurate results.

Table 4.4: Results of facial expression recognition
Expressions Correct/misses Success Ratio
Smile 67/12 84.8%
Sad 22/8 73.3%
Surprise 45/16 73.8%
Normal 63/17 78.8%
Total 197/53 78.8%
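The success ratios follow from correct / (correct + misses) for each expression; a quick check in Python using the counts from Table 4.4:

```python
# Per-expression counts (correct, misses) from Table 4.4
results = {"Smile": (67, 12), "Sad": (22, 8), "Surprise": (45, 16), "Normal": (63, 17)}

# Success ratio = correct / (correct + misses), as a percentage
ratios = {k: round(100 * c / (c + m), 1) for k, (c, m) in results.items()}

total_correct = sum(c for c, _ in results.values())    # 197
total_faces = sum(c + m for c, m in results.values())  # 250
overall = round(100 * total_correct / total_faces, 1)  # 78.8
```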
5.1 Conclusion:
In this thesis, we have presented and implemented a simple approach for facial expression analysis and recognition. The computation is performed in two major steps: the first is detection of the facial region with skin color segmentation and estimation of feature maps, extracting two interest regions centered on the eyes and mouth; the second is verification of the facial emotion from characteristic features using the Bezier curve and the Hausdorff distance. Experimental results demonstrate an average success ratio of 78.8% in recognizing facial expressions, which shows good performance, sufficient for application to mobile devices.

5.2 Future Work:
Human emotion can be detected easily by humans from facial expressions, but it is somewhat difficult to identify through a computer interface. This challenge motivates researchers to build automated systems that recognize human emotions in a very effective and reliable manner. An ideal expression detection system should identify emotions for any age and gender. Such a system should also be invariant to distractions like glasses, different hair styles, moustaches, facial hair and different lighting conditions [67]. Such a system should detect the correct facial expression regardless of large changes in viewing conditions and rigid movement [67]. Achieving optimal feature extraction and classification is a key challenge in this field because of the huge variability in the input images. For better recognition results, most facial expression recognition methods require more work to control imaging conditions like the position and orientation of the face with respect to the camera. In this thesis, various challenges and methods related to emotion detection are reviewed. These techniques focus on many factors, namely face detection, lip detection, eye detection and the detection of various emotions. These recognition systems provide the best results only for some sets of data. In future we aim to come up with a system for easier detection of human emotions, using better segmentation methods and a wide range of classifiers like neural networks, support vector machines, etc., to simplify the process and give results as accurately as possible.

1 A. Khashman, "Intelligent Local Face Recognition", in Recent Advances in Face Recognition, edited by K. Delac, M. Grgic and M. S. Bartlett, ISBN 978-953-7619-34-3, pp. 236, December 2008, I-Tech, Vienna, Austria.

2 S. Lawrence, A. C. Tsoi and A. D. Back, “Face Recognition: A Convolutional Neural-Network Approach”, IEEE TRANSACTIONS ON NEURAL NETWORKS, vol. 8, no. 1, (1997) January, pp. 98-113.

3 T. H. Le and L. Bu, “Face Recognition Based on SVM and 2DPCA”, International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 4, no. 3, (2011) September, pp. 85-94.

4 M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR '91, (1991), pp. 586-591.

5 A. J. Goldstein, L. D. Harmon and A. B. Lesk, "Identification of human faces", Proc. IEEE, vol. 59, (1971), p. 748.
6 M. S. Bartlett, Javier R. Movellan, and Terrence J. Sejnowski, “Face recognition by independent component analysis”, Neural Networks, IEEE Transactions on, vol. 13, no. 6, (2002), pp. 1450-1464.

7 P. S. Aleksic and A. K. Katsaggelos, “Automatic Facial Expression Recognition using Facial Animation Parameters and Multi Stream HMM” IEEE Transactions on Information Forensic and Security, vol. 1, no. 1, (2006), pp. 3-11.

8 K. Anderson and P. W. McOwan, “A Real-Time Automated System for the Recognition of Human Facial Expressions” IEEE Transactions on Systems, Man and Cybernetics—Part B: Cybernetics, vol. 36, no. 1, (2006) February, pp. 96-105.

9 D. Terzopoulos and K. Waters, “Analysis of facial images using physical and anatomical models”, Proc. 3rd Int. Conf. on Computer Vision, (1990), pp. 727-732.

10 H. Gu, G. Su and C. Du, “Feature Points Extraction from Faces”, Proc. Image and Vision Computing, (2003) November, pp. 154-158.

11 R. Brunelli, “Face Recognition: Features versus Templates”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, (1993), pp. 1042-1052.

12 P. Campadelli, R. Lanzarotti and G. Lipori, “Automatic Facial Feature Extraction for Face Recognition”, ItechEdn and Publishing Face Recognition, Chapter 3, (2007) pp. 31-58.

13 G. Chow and X. Li, “A System for Automatic Facial Feature Extraction”, Pattern and Expression Recognition, vol. 22, no. 16, (1999), pp. 1739-1755.

14 G. Chow and X. Li, “Towards a system for automatic facial feature detection”, Pattern Recognition, vol. 26, no. 12, December (1993), pp. 1739–1755.

15 B´ezier curve. Wikipedia, 2013. http://en.wikipedia.org/wiki/Bezier_curve.

16 M. R. Rahman, M. A. Ali, and G. Sorwar. Finding Significant Points for Parametric Curve Generation Techniques. Journal of Advanced Computations, 2(2), 2008.

17 Seyed Mehdi Lajevardi, Zahir M. Hussain, "Local Feature Extraction Method for Facial Expression Recognition", 17th European Signal Processing Conference (EUSIPCO 2009), Glasgow, Scotland, August 24-28, 2009.

18 Abhishek Singh and Saurabh Kumar “Face Recognition using Principal component analysis and Eigen Face approach”.

19 Bartlett, M. S., Donato, G., Ekman, P., Hager, J. C., Sejnowski, T.J., 1999,”Classifying Facial Actions”, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, pp. 974-989.

20 Ekman, P, Friesen, “Constants across Cultures in the Face and Emotion”, J. Pers. Psycho. WV, 1971, vol. 17, no. 2, pp. 124-129.

21 Cohn, J.F., Kanade, T., Lien, J.J., 1998,”Automated Facial Expression Recognition Based on FACS Action Units”, Proc. Third EEE Int. Conf. Automatic Face and Gesture Recognition, pp. 390-395.

22 JAFFE Dataset “Japanese Female Facial Expression Database”.

23 Anastasios C. Koutlas, Dimitrios I. Fotiadis “A Region Based Methodology for Facial Expression Recognition”
24 Chin S. and Kim K., "Emotional Intensity-Based Facial Expression Cloning for Low Polygonal Applications," IEEE Transactions on Systems, Man, and Cybernetics-Part C: Applications and Reviews, vol. 39, no. 3, pp. 315-330, 2009.

25 Ekman P. and Friesen W., Unmasking the Face A Guide to Recognizing Emotions from Facial Clues, Cambridge MA Malor Books, CA, 2003.

26 Fasel B. and Luettin J., "Automatic Facial Expression Analysis: a Survey," Pattern Recognition, vol. 36, no. 1, pp. 259-275, 2003.

27 Darwin C., The Expression of the Emotions in Man and Animals, John Murray, London, 1872.

28 Talbi H., Draa A., and Batouche M., "A Novel Quantum-Inspired Evaluation Algorithm for Multi-Source Affine Image Registration," The International Arab Journal of Information Technology, vol. 3, no. 1, pp. 9-16, 2006.

29 Salman N., "Image Segmentation and Edge Detection Based on Chan-Vese Algorithm," The International Arab Journal of Information Technology, vol. 3, no. 3, pp. 69-74, 2006.

30 Shahab W., Otum H., and Ghoul F., "A Modified 2D Chain Code Algorithm for Object Segmentation and Contour Tracing," The International Arab Journal of Information Technology, vol. 6, no. 3, pp. 250-233, 2009.

31 Steffens J., Elagin E., Neven H., "Person Spotter-Fast and Robust System for Human Detection, Tracking and Recognition," in Proceedings of the 2nd International Conference on Face and Gesture Recognition, pp. 516-521, 1998.

32 Essa I. and Pentland A., "Coding, Analysis, Interpretation and Recognition of Facial Expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 757-763, 1997.

33 Pentland A., Moghaddam B., and Starner T., "View-Based and Modular Eigenspaces for Face Recognition," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 84-91, 1994.

34 Fasel B. and Luettin J., "Automatic Facial Expression Analysis: a Survey," Pattern Recognition, vol. 36, no. 1, pp. 259-275, 2003.

35 Zhang Y. and Martínez A., "Recognition of Expression Variant Faces Using Weighted Subspaces," in Proceedings of the 17th International Conference on Pattern Recognition, vol. 3, pp. 149-152, 2004.

36 Bindu M., Gupta P., and Tiwary U., "Cognitive Model-Based Emotion Recognition from Facial Expressions for Live Human Computer Interaction," in Proceedings of the IEEE Symposium on Computational Intelligence in Image and Signal Processing, Honolulu, pp. 351-356, 2007.

38 Ghanem K., Caplier A., and Kholladi M., "Contribution of Facial Transient Features in Facial Expression Analysis: Classification & Quantification," Journal of Theoretical and Applied Information Technology, vol. 28, no. 1, pp. 135-139, 2010.

39 Mitra S. and Acharya T., “Gesture Recognition: A Survey,” IEEE Transaction on Systems, Man, and Cybernetics-Part C: Applications and Reviews, vol. 37, no. 3, pp. 311-324, 2007.

40 Michel P. and Kaliouby R., "Real Time Facial Expression Recognition in Video using Support Vector Machines," in Proceedings of the 5th International Conference on Multimodal Interfaces, USA, pp. 258-264, 2003.

41 Gesu V., Zavidovique B., and Tabacchi M., "Face Expression Recognition through Broken Symmetries," in Proceedings of the 6th Indian Conference on Computer Vision, Graphics & Image Processing, pp. 714-721, 2008.

42 Gizatdinova Y. and Surakka V., "Feature-Based Detection of Facial Landmarks from Neutral and Expressive Facial Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 1, pp. 135-139, 2006.

43 Kakumanu P. and Bourbakis N., "A Local-Global Graph Approach for Facial Expression Recognition," in Proceedings of the 18th IEEE International Conference on Tools with Artificial Intelligence, Arlington, pp. 685-692, 2006.

44 Chang K., Bowyer K., and Flynn P., "Multiple Nose Region Matching for 3D Face Recognition under Varying Facial Expression," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 10, pp. 1695-1700, 2006.

45 Gundimada S. and Asari V., "Facial Recognition Using Multisensor Images Based on Localized Kernel Eigen Spaces," IEEE Transactions on Image Processing, vol. 18, no. 6, pp. 1314-1325, 2009.

46 Dai Y., Shibata Y., Hashimoto K., Ishii T., Osuo A., Katamachi K., Nokuchi K., Kakizaki N., and Cai D., "Facial Expression Recognition of Person without Language Ability Based on the Optical Flow Histogram," in Proceedings of the 5th International Conference on Signal Processing, Beijing, vol. 2, pp. 1209-1212, 2000.

47 Martin C., Werner U., and Gross H., "A Real-Time Facial Expression Recognition System based on Active Appearance Models using Gray Images and Edge Images," in Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition, pp. 1-6, 2008.

48 Vretos N., Nikolaidis N., and Pitas I., "A Model-Based Facial Expression Recognition Algorithm using Principal Components Analysis," in Proceedings of the 16th IEEE International Conference on Image Processing, Cairo, pp. 3301-3304, 2009.

49 Ramachandran M., Zhou S., Jhalani D., and Chellappa R., "A Method for Converting a Smiling Face to a Neutral Face with Applications to Face Recognition," in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 977-980, 2005.

50 Bronstein A., Bronstein M., and Kimmel R., “Expression-Invariant Representations of Faces,” IEEE Transaction on Image Processing, vol. 16, no. 1, pp. 188-197, 2007.

51 Ibrahim F., Chae J., Arifin N., Zarmani N., and Cho J., "EMG Analysis of Facial Muscles Exercise using Oral Cavity Rehabilitative Device," in Proceedings of IEEE Region 10 Conference TENCON, Hong Kong, pp. 1-4, 2006.

52 Jayatilake D., Gruebler A., and Suzuki K., "An Analysis of Facial Morphology for the Robot Assisted Smile Recovery," in Proceedings of the 4th International Conference on Information and Automation for Sustainability, pp. 395-400, 2008.

53 Takami A., Ito K., and Nishida S., "A Method for Quantifying Facial Muscle Movements in the Smile During Facial Expression Training," in Proceedings of IEEE International Conference on Systems, Man and Cybernetics, Singapore, pp. 1153-1157, 2008.

54 Amberg B., Knothe R., and Vetter T., "Expression Invariant 3D Face Recognition with a Morphable Model," in Proceedings of the 8th IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, pp. 1-6, 2008.

55 Ang L., Belen E., Bernardo R., Boongaling E., Briones G., and Corone J., "Facial Expression Recognition through Pattern Analysis of Facial Muscle Movements Utilizing Electromyogram Sensors," in Proceedings of IEEE TENCON, vol. 3, pp. 600-603, 2004.

56 Sun Y., Chen X., Rosato M., and Yin L., "Tracking Vertex Flow and Model Adaptation for Three-Dimensional Spatiotemporal Face Analysis," IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 40, no. 3, pp. 461-474, 2010.

57 Choe B. and Ko H., "Analysis and Synthesis of Facial Expressions with Hand-Generated Muscle Actuation Basis," in Proceedings of the 14th IEEE Conference on Computer Animation, Seoul, pp. 12-19, 2001.

58 Chin S. and Kim K., "Emotional Intensity-Based Facial Expression Cloning for Low Polygonal Applications," IEEE Transactions on Systems, Man, and Cybernetics-Part C: Applications and Reviews, vol. 39, no. 3, pp. 315-330, 2009.

59 Hsieh C., Lai S., and Chen Y., "An Optical Flow-Based Approach to Robust Face Recognition under Expression Variations," IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 233-240, 2010.

60 Ohta H., Saji H., and Nakatani H., "Muscle-Based Feature Models for Analyzing Facial Expressions," in Proceedings of Computer Vision, Lecture Notes in Computer Science, vol. 1352/1997, pp. 711-718, 1997.

61 Tang S., Yan H., and Liew A., "A NURBS-Based Vector Muscle Model for Generating Human Facial Expressions," in Proceedings of the 4th International Conference on Information, Communications and Signal Processing and Pacific Rim Conference on Multimedia, Singapore, vol. 2, pp. 758-762, 2003.

62 Chin S. and Kim K., "Emotional Intensity-Based Facial Expression Cloning for Low Polygonal Applications," IEEE Transactions on Systems, Man, and Cybernetics-Part C: Applications and Reviews, vol. 39, no. 3, pp. 315-330, 2009.

63 Hsieh C., Lai S., and Chen Y., "An Optical Flow-Based Approach to Robust Face Recognition under Expression Variations," IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 233-240, 2010.

64 Mehrabian A., "Communication without Words", Psychology Today, vol. 2, no. 4, pp. 53-56, 1968.

65 Jun C., Liang W., Guang X., and Jiang X., "Facial Expression Recognition based on Wavelet Energy Distribution Features and Neural Network Ensemble," in Proceedings of Global Congress on Intelligent Systems, Xiamen, vol. 2, pp. 122-126, 2009.

66 Tsai P. and Jan T., "Expression-Invariant Face Recognition System using Subspace Model Analysis," IEEE International Conference on Systems, Man and Cybernetics, vol. 2, no. 1, pp. 1712-1717, 2005.

67 M. Pantic and L. J. M. Rothkrantz, "Facial Action Recognition for Facial Expression Analysis from Static Face Images," IEEE Transactions on Systems, Man, and Cybernetics, vol. 34, pp. 1449-1461, 2004.